00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 174
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3675
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.031 The recommended git tool is: git
00:00:00.032 using credential 00000000-0000-0000-0000-000000000002
00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.047 Fetching changes from the remote Git repository
00:00:00.053 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.069 Using shallow fetch with depth 1
00:00:00.069 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.069 > git --version # timeout=10
00:00:00.083 > git --version # 'git version 2.39.2'
00:00:00.083 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.102 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.102 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.348 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.360 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.371 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.371 > git config core.sparsecheckout # timeout=10
00:00:02.382 > git read-tree -mu HEAD # timeout=10
00:00:02.397 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.419 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.419 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.645 [Pipeline] Start of Pipeline
00:00:02.663 [Pipeline] library
00:00:02.665 Loading library shm_lib@master
00:00:02.665 Library shm_lib@master is cached. Copying from home.
00:00:02.686 [Pipeline] node
00:00:02.701 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.702 [Pipeline] {
00:00:02.713 [Pipeline] catchError
00:00:02.714 [Pipeline] {
00:00:02.752 [Pipeline] wrap
00:00:02.762 [Pipeline] {
00:00:02.771 [Pipeline] stage
00:00:02.773 [Pipeline] { (Prologue)
00:00:02.793 [Pipeline] echo
00:00:02.795 Node: VM-host-WFP7
00:00:02.802 [Pipeline] cleanWs
00:00:02.813 [WS-CLEANUP] Deleting project workspace...
00:00:02.813 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.822 [WS-CLEANUP] done
00:00:03.007 [Pipeline] setCustomBuildProperty
00:00:03.093 [Pipeline] httpRequest
00:00:03.418 [Pipeline] echo
00:00:03.420 Sorcerer 10.211.164.20 is alive
00:00:03.430 [Pipeline] retry
00:00:03.432 [Pipeline] {
00:00:03.446 [Pipeline] httpRequest
00:00:03.451 HttpMethod: GET
00:00:03.452 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.452 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.453 Response Code: HTTP/1.1 200 OK
00:00:03.453 Success: Status code 200 is in the accepted range: 200,404
00:00:03.454 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.599 [Pipeline] }
00:00:03.613 [Pipeline] // retry
00:00:03.619 [Pipeline] sh
00:00:03.905 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.920 [Pipeline] httpRequest
00:00:04.321 [Pipeline] echo
00:00:04.323 Sorcerer 10.211.164.20 is alive
00:00:04.331 [Pipeline] retry
00:00:04.333 [Pipeline] {
00:00:04.344 [Pipeline] httpRequest
00:00:04.349 HttpMethod: GET
00:00:04.350 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:04.351 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:04.351 Response Code: HTTP/1.1 200 OK
00:00:04.352 Success: Status code 200 is in the accepted range: 200,404
00:00:04.352 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:25.481 [Pipeline] }
00:00:25.493 [Pipeline] // retry
00:00:25.500 [Pipeline] sh
00:00:25.786 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:28.341 [Pipeline] sh
00:00:28.624 + git -C spdk log --oneline -n5
00:00:28.624 b18e1bd62 version: v24.09.1-pre
00:00:28.624 19524ad45 version: v24.09
00:00:28.624 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:00:28.624 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:00:28.624 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:00:28.644 [Pipeline] withCredentials
00:00:28.656 > git --version # timeout=10
00:00:28.669 > git --version # 'git version 2.39.2'
00:00:28.686 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:28.688 [Pipeline] {
00:00:28.697 [Pipeline] retry
00:00:28.699 [Pipeline] {
00:00:28.714 [Pipeline] sh
00:00:28.998 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:29.271 [Pipeline] }
00:00:29.290 [Pipeline] // retry
00:00:29.295 [Pipeline] }
00:00:29.313 [Pipeline] // withCredentials
00:00:29.323 [Pipeline] httpRequest
00:00:29.719 [Pipeline] echo
00:00:29.721 Sorcerer 10.211.164.20 is alive
00:00:29.731 [Pipeline] retry
00:00:29.733 [Pipeline] {
00:00:29.748 [Pipeline] httpRequest
00:00:29.754 HttpMethod: GET
00:00:29.755 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:29.756 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:29.768 Response Code: HTTP/1.1 200 OK
00:00:29.769 Success: Status code 200 is in the accepted range: 200,404
00:00:29.770 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:04.337 [Pipeline] }
00:01:04.361 [Pipeline] // retry
00:01:04.372 [Pipeline] sh
00:01:04.663 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:06.057 [Pipeline] sh
00:01:06.342 + git -C dpdk log --oneline -n5
00:01:06.342 eeb0605f11 version: 23.11.0
00:01:06.342 238778122a doc: update release notes for 23.11
00:01:06.342 46aa6b3cfc doc: fix description of RSS features
00:01:06.342 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:06.342 7e421ae345 devtools: support skipping forbid rule check
00:01:06.360 [Pipeline] writeFile
00:01:06.375 [Pipeline] sh
00:01:06.662 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:06.675 [Pipeline] sh
00:01:06.960 + cat autorun-spdk.conf
00:01:06.960 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.960 SPDK_RUN_ASAN=1
00:01:06.960 SPDK_RUN_UBSAN=1
00:01:06.960 SPDK_TEST_RAID=1
00:01:06.960 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:06.960 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:06.961 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.969 RUN_NIGHTLY=1
00:01:06.971 [Pipeline] }
00:01:06.985 [Pipeline] // stage
00:01:07.001 [Pipeline] stage
00:01:07.003 [Pipeline] { (Run VM)
00:01:07.016 [Pipeline] sh
00:01:07.303 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:07.303 + echo 'Start stage prepare_nvme.sh'
00:01:07.303 Start stage prepare_nvme.sh
00:01:07.303 + [[ -n 1 ]]
00:01:07.303 + disk_prefix=ex1
00:01:07.303 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:07.303 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:07.303 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:07.304 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.304 ++ SPDK_RUN_ASAN=1
00:01:07.304 ++ SPDK_RUN_UBSAN=1
00:01:07.304 ++ SPDK_TEST_RAID=1
00:01:07.304 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:07.304 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:07.304 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:07.304 ++ RUN_NIGHTLY=1
00:01:07.304 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:07.304 + nvme_files=()
00:01:07.304 + declare -A nvme_files
00:01:07.304 + backend_dir=/var/lib/libvirt/images/backends
00:01:07.304 + nvme_files['nvme.img']=5G
00:01:07.304 + nvme_files['nvme-cmb.img']=5G
00:01:07.304 + nvme_files['nvme-multi0.img']=4G
00:01:07.304 + nvme_files['nvme-multi1.img']=4G
00:01:07.304 + nvme_files['nvme-multi2.img']=4G
00:01:07.304 + nvme_files['nvme-openstack.img']=8G
00:01:07.304 + nvme_files['nvme-zns.img']=5G
00:01:07.304 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:07.304 + (( SPDK_TEST_FTL == 1 ))
00:01:07.304 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:07.304 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:07.304 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:07.304 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:07.304 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:07.304 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:07.304 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:07.304 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.304 + for nvme in "${!nvme_files[@]}"
00:01:07.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:07.564 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.564 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:07.564 + echo 'End stage prepare_nvme.sh'
00:01:07.564 End stage prepare_nvme.sh
00:01:07.578 [Pipeline] sh
00:01:07.863 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:07.863 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:07.863
00:01:07.863 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:07.863 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:07.863 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:07.863 HELP=0
00:01:07.863 DRY_RUN=0
00:01:07.863 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:07.863 NVME_DISKS_TYPE=nvme,nvme,
00:01:07.863 NVME_AUTO_CREATE=0
00:01:07.863 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:07.863 NVME_CMB=,,
00:01:07.863 NVME_PMR=,,
00:01:07.863 NVME_ZNS=,,
00:01:07.863 NVME_MS=,,
00:01:07.863 NVME_FDP=,,
00:01:07.863 SPDK_VAGRANT_DISTRO=fedora39
00:01:07.863 SPDK_VAGRANT_VMCPU=10
00:01:07.863 SPDK_VAGRANT_VMRAM=12288
00:01:07.863 SPDK_VAGRANT_PROVIDER=libvirt
00:01:07.863 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:07.863 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:07.863 SPDK_OPENSTACK_NETWORK=0
00:01:07.863 VAGRANT_PACKAGE_BOX=0
00:01:07.863 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:07.863 FORCE_DISTRO=true
00:01:07.863 VAGRANT_BOX_VERSION=
00:01:07.863 EXTRA_VAGRANTFILES=
00:01:07.863 NIC_MODEL=virtio
00:01:07.863
00:01:07.863 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:07.863 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:09.774 Bringing machine 'default' up with 'libvirt' provider...
00:01:10.346 ==> default: Creating image (snapshot of base box volume).
00:01:10.346 ==> default: Creating domain with the following settings...
00:01:10.346 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732810441_acb2ffc5dbb0ec50a33d
00:01:10.346 ==> default: -- Domain type: kvm
00:01:10.346 ==> default: -- Cpus: 10
00:01:10.347 ==> default: -- Feature: acpi
00:01:10.347 ==> default: -- Feature: apic
00:01:10.347 ==> default: -- Feature: pae
00:01:10.347 ==> default: -- Memory: 12288M
00:01:10.347 ==> default: -- Memory Backing: hugepages:
00:01:10.347 ==> default: -- Management MAC:
00:01:10.347 ==> default: -- Loader:
00:01:10.347 ==> default: -- Nvram:
00:01:10.347 ==> default: -- Base box: spdk/fedora39
00:01:10.347 ==> default: -- Storage pool: default
00:01:10.347 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732810441_acb2ffc5dbb0ec50a33d.img (20G)
00:01:10.347 ==> default: -- Volume Cache: default
00:01:10.347 ==> default: -- Kernel:
00:01:10.347 ==> default: -- Initrd:
00:01:10.347 ==> default: -- Graphics Type: vnc
00:01:10.347 ==> default: -- Graphics Port: -1
00:01:10.347 ==> default: -- Graphics IP: 127.0.0.1
00:01:10.347 ==> default: -- Graphics Password: Not defined
00:01:10.347 ==> default: -- Video Type: cirrus
00:01:10.347 ==> default: -- Video VRAM: 9216
00:01:10.347 ==> default: -- Sound Type:
00:01:10.347 ==> default: -- Keymap: en-us
00:01:10.347 ==> default: -- TPM Path:
00:01:10.347 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:10.347 ==> default: -- Command line args:
00:01:10.347 ==> default: -> value=-device,
00:01:10.347 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:10.347 ==> default: -> value=-drive,
00:01:10.347 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:10.347 ==> default: -> value=-device,
00:01:10.347 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.347 ==> default: -> value=-device,
00:01:10.347 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:10.347 ==> default: -> value=-drive,
00:01:10.347 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:10.347 ==> default: -> value=-device,
00:01:10.347 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.347 ==> default: -> value=-drive,
00:01:10.347 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:10.347 ==> default: -> value=-device,
00:01:10.347 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.347 ==> default: -> value=-drive,
00:01:10.347 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:10.347 ==> default: -> value=-device,
00:01:10.347 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.347 ==> default: Creating shared folders metadata...
00:01:10.607 ==> default: Starting domain.
00:01:12.518 ==> default: Waiting for domain to get an IP address...
00:01:30.625 ==> default: Waiting for SSH to become available...
00:01:30.625 ==> default: Configuring and enabling network interfaces...
00:01:35.911 default: SSH address: 192.168.121.92:22
00:01:35.911 default: SSH username: vagrant
00:01:35.911 default: SSH auth method: private key
00:01:38.453 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:46.589 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:51.873 ==> default: Mounting SSHFS shared folder...
00:01:54.412 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:54.412 ==> default: Checking Mount..
00:01:55.792 ==> default: Folder Successfully Mounted!
00:01:55.792 ==> default: Running provisioner: file...
00:01:56.732 default: ~/.gitconfig => .gitconfig
00:01:57.303
00:01:57.303 SUCCESS!
00:01:57.303
00:01:57.303 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:57.303 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:57.303 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:57.303
00:01:57.312 [Pipeline] }
00:01:57.327 [Pipeline] // stage
00:01:57.334 [Pipeline] dir
00:01:57.335 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:57.336 [Pipeline] {
00:01:57.347 [Pipeline] catchError
00:01:57.349 [Pipeline] {
00:01:57.360 [Pipeline] sh
00:01:57.642 + vagrant ssh-config --host vagrant
00:01:57.642 + sed -ne /^Host/,$p
00:01:57.642 + tee ssh_conf
00:02:00.178 Host vagrant
00:02:00.178 HostName 192.168.121.92
00:02:00.178 User vagrant
00:02:00.178 Port 22
00:02:00.178 UserKnownHostsFile /dev/null
00:02:00.178 StrictHostKeyChecking no
00:02:00.178 PasswordAuthentication no
00:02:00.178 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:00.178 IdentitiesOnly yes
00:02:00.178 LogLevel FATAL
00:02:00.178 ForwardAgent yes
00:02:00.178 ForwardX11 yes
00:02:00.178
00:02:00.193 [Pipeline] withEnv
00:02:00.196 [Pipeline] {
00:02:00.213 [Pipeline] sh
00:02:00.502 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:00.502 source /etc/os-release
00:02:00.502 [[ -e /image.version ]] && img=$(< /image.version)
00:02:00.502 # Minimal, systemd-like check.
00:02:00.502 if [[ -e /.dockerenv ]]; then
00:02:00.502 # Clear garbage from the node's name:
00:02:00.502 # agt-er_autotest_547-896 -> autotest_547-896
00:02:00.502 # $HOSTNAME is the actual container id
00:02:00.502 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:00.502 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:00.502 # We can assume this is a mount from a host where container is running,
00:02:00.502 # so fetch its hostname to easily identify the target swarm worker.
00:02:00.502 container="$(< /etc/hostname) ($agent)"
00:02:00.502 else
00:02:00.502 # Fallback
00:02:00.502 container=$agent
00:02:00.502 fi
00:02:00.502 fi
00:02:00.502 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:00.502
00:02:00.775 [Pipeline] }
00:02:00.790 [Pipeline] // withEnv
00:02:00.798 [Pipeline] setCustomBuildProperty
00:02:00.813 [Pipeline] stage
00:02:00.816 [Pipeline] { (Tests)
00:02:00.835 [Pipeline] sh
00:02:01.118 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:01.392 [Pipeline] sh
00:02:01.675 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:01.951 [Pipeline] timeout
00:02:01.951 Timeout set to expire in 1 hr 30 min
00:02:01.953 [Pipeline] {
00:02:01.967 [Pipeline] sh
00:02:02.251 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:02.822 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:02:02.835 [Pipeline] sh
00:02:03.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:03.395 [Pipeline] sh
00:02:03.678 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:03.955 [Pipeline] sh
00:02:04.239 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:04.508 ++ readlink -f spdk_repo
00:02:04.508 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:04.508 + [[ -n /home/vagrant/spdk_repo ]]
00:02:04.508 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:04.508 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:04.508 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:04.508 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:04.508 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:04.508 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:04.508 + cd /home/vagrant/spdk_repo
00:02:04.508 + source /etc/os-release
00:02:04.508 ++ NAME='Fedora Linux'
00:02:04.508 ++ VERSION='39 (Cloud Edition)'
00:02:04.508 ++ ID=fedora
00:02:04.508 ++ VERSION_ID=39
00:02:04.508 ++ VERSION_CODENAME=
00:02:04.508 ++ PLATFORM_ID=platform:f39
00:02:04.508 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:04.508 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:04.508 ++ LOGO=fedora-logo-icon
00:02:04.508 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:04.508 ++ HOME_URL=https://fedoraproject.org/
00:02:04.508 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:04.508 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:04.508 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:04.508 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:04.508 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:04.508 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:04.508 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:04.508 ++ SUPPORT_END=2024-11-12
00:02:04.508 ++ VARIANT='Cloud Edition'
00:02:04.508 ++ VARIANT_ID=cloud
00:02:04.508 + uname -a
00:02:04.508 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:04.508 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:05.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:05.118 Hugepages
00:02:05.118 node hugesize free / total
00:02:05.118 node0 1048576kB 0 / 0
00:02:05.118 node0 2048kB 0 / 0
00:02:05.118
00:02:05.118 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:05.118 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:05.118 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:05.118 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:05.118 + rm -f /tmp/spdk-ld-path
00:02:05.118 + source autorun-spdk.conf
00:02:05.118 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.118 ++ SPDK_RUN_ASAN=1
00:02:05.118 ++ SPDK_RUN_UBSAN=1
00:02:05.118 ++ SPDK_TEST_RAID=1
00:02:05.118 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:05.118 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:05.118 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.118 ++ RUN_NIGHTLY=1
00:02:05.118 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:05.118 + [[ -n '' ]]
00:02:05.118 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:05.118 + for M in /var/spdk/build-*-manifest.txt
00:02:05.118 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:05.118 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:05.118 + for M in /var/spdk/build-*-manifest.txt
00:02:05.118 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:05.118 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:05.118 + for M in /var/spdk/build-*-manifest.txt
00:02:05.118 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:05.119 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:05.119 ++ uname
00:02:05.119 + [[ Linux == \L\i\n\u\x ]]
00:02:05.119 + sudo dmesg -T
00:02:05.392 + sudo dmesg --clear
00:02:05.392 + dmesg_pid=6164
00:02:05.392 + [[ Fedora Linux == FreeBSD ]]
00:02:05.392 + sudo dmesg -Tw
00:02:05.392 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:05.392 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:05.392 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:05.392 + [[ -x /usr/src/fio-static/fio ]]
00:02:05.392 + export FIO_BIN=/usr/src/fio-static/fio
00:02:05.392 + FIO_BIN=/usr/src/fio-static/fio
00:02:05.392 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:05.392 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:05.392 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:05.392 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:05.392 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:05.392 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:05.392 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:05.392 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:05.392 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:05.392 Test configuration:
00:02:05.392 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.392 SPDK_RUN_ASAN=1
00:02:05.392 SPDK_RUN_UBSAN=1
00:02:05.392 SPDK_TEST_RAID=1
00:02:05.392 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:05.392 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:05.392 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.392 RUN_NIGHTLY=1
00:02:05.392 16:14:57 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:05.392 16:14:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:05.392 16:14:57 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:05.392 16:14:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:05.392 16:14:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:05.392 16:14:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:05.392 16:14:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.392 16:14:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.392 16:14:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.392 16:14:57 -- paths/export.sh@5 -- $ export PATH
00:02:05.392 16:14:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:05.392 16:14:57 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:05.392 16:14:57 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:05.392 16:14:57 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732810497.XXXXXX
00:02:05.392 16:14:57 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732810497.XNivWA
00:02:05.392 16:14:57 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:05.392 16:14:57 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:02:05.392 16:14:57 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:05.392 16:14:57 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:05.392 16:14:57 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:05.392 16:14:57 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:05.392 16:14:57 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:05.392 16:14:57 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:05.392 16:14:57 -- common/autotest_common.sh@10 -- $ set +x
00:02:05.392 16:14:57 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:05.392 16:14:57 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:05.392 16:14:57 -- pm/common@17 -- $ local monitor
00:02:05.392 16:14:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:05.392 16:14:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:05.392 16:14:57 -- pm/common@25 -- $ sleep 1
00:02:05.392 16:14:57 -- pm/common@21 -- $ date +%s
00:02:05.392 16:14:57 -- pm/common@21 -- $ date +%s
00:02:05.392 16:14:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732810497
00:02:05.392 16:14:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732810497
00:02:05.652 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732810497_collect-cpu-load.pm.log
00:02:05.652 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732810497_collect-vmstat.pm.log
00:02:06.591 16:14:58 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:02:06.591 16:14:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:06.591 16:14:58 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:06.591 16:14:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:06.591 16:14:58 -- spdk/autobuild.sh@16 -- $ date -u
00:02:06.591 Thu Nov 28 04:14:58 PM UTC 2024
00:02:06.591 16:14:58 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:06.591 v24.09-1-gb18e1bd62
00:02:06.591 16:14:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:06.591 16:14:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:06.591 16:14:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:06.591 16:14:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:06.591 16:14:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:06.591 ************************************
00:02:06.591 START TEST asan
00:02:06.591 ************************************
00:02:06.591 using asan
00:02:06.591 16:14:58 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:06.591
00:02:06.591 real 0m0.001s
00:02:06.591 user 0m0.000s
00:02:06.591 sys 0m0.000s
00:02:06.591 16:14:58 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:06.591 16:14:58 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:06.591 ************************************
00:02:06.591 END TEST asan
00:02:06.591 ************************************
00:02:06.591 16:14:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:06.591 16:14:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:06.591 16:14:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:06.591 16:14:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:06.591 16:14:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:06.591 ************************************
00:02:06.591 START TEST ubsan
00:02:06.591 ************************************
00:02:06.591 using ubsan
00:02:06.591 16:14:58 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:06.591
00:02:06.591 real 0m0.001s
00:02:06.591 user 0m0.000s
00:02:06.591 sys 0m0.000s
00:02:06.591 16:14:58 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:06.591 16:14:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:06.591 ************************************
00:02:06.591 END TEST ubsan
00:02:06.591 ************************************
00:02:06.591 16:14:58 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:06.591 16:14:58 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:06.591 16:14:58 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:06.591 16:14:58 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:02:06.591 16:14:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:06.591 16:14:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:06.591 ************************************
00:02:06.591 START TEST build_native_dpdk
00:02:06.591 ************************************
00:02:06.591 16:14:58 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ !
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:06.591 eeb0605f11 version: 23.11.0 00:02:06.591 238778122a doc: update release notes for 23.11 00:02:06.591 46aa6b3cfc doc: fix description of RSS features 00:02:06.591 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:06.591 7e421ae345 devtools: support skipping forbid rule check 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:06.591 16:14:58 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:06.591 16:14:58 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:06.851 16:14:58 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:06.851 16:14:58 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.851 16:14:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:06.852 patching file config/rte_config.h 00:02:06.852 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:06.852 patching file lib/pcapng/rte_pcapng.c 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.852 16:14:58 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:06.852 16:14:58 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@184 -- 
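The `cmp_versions` traces above split each version string on `.-:` and compare it component by component (here concluding that 23.11.0 is not `<` 21.11.0 and not `>=` 24.07.0, so both patches apply). A minimal sketch of the same idea, using a hypothetical helper name `ver_lt` rather than the actual `scripts/common.sh` functions:

```shell
# Returns 0 (true) when $1 < $2, comparing dot-separated numeric
# components left to right, mirroring the loop traced in the log.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Force base-10 so components like "07" are not read as octal.
        local x=$(( 10#${a[i]:-0} )) y=$(( 10#${b[i]:-0} ))
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1   # equal is not less-than
}

ver_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"
ver_lt 23.11.0 24.07.0 && echo "23.11.0 is older than 24.07.0"
```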
$ '[' Linux = FreeBSD ']' 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:06.852 16:14:58 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:13.424 The Meson build system 00:02:13.424 Version: 1.5.0 00:02:13.424 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:13.424 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:13.424 Build type: native build 00:02:13.424 Program cat found: YES (/usr/bin/cat) 00:02:13.424 Project name: DPDK 00:02:13.424 Project version: 23.11.0 00:02:13.424 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:13.424 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:13.424 Host machine cpu family: x86_64 00:02:13.424 Host machine cpu: x86_64 00:02:13.424 Message: ## Building in Developer Mode ## 00:02:13.424 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.424 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:13.424 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.424 Program python3 found: YES (/usr/bin/python3) 00:02:13.424 Program cat found: YES (/usr/bin/cat) 00:02:13.424 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:13.424 Compiler for C supports arguments -march=native: YES 00:02:13.424 Checking for size of "void *" : 8 00:02:13.424 Checking for size of "void *" : 8 (cached) 00:02:13.424 Library m found: YES 00:02:13.424 Library numa found: YES 00:02:13.424 Has header "numaif.h" : YES 00:02:13.424 Library fdt found: NO 00:02:13.424 Library execinfo found: NO 00:02:13.424 Has header "execinfo.h" : YES 00:02:13.424 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:13.424 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.424 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.424 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.424 Run-time dependency openssl found: YES 3.1.1 00:02:13.424 Run-time dependency libpcap found: YES 1.10.4 00:02:13.424 Has header "pcap.h" with dependency libpcap: YES 00:02:13.424 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.424 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.424 Compiler for C supports arguments -Wformat: YES 00:02:13.424 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.424 Compiler for C supports arguments -Wformat-security: NO 00:02:13.424 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.424 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.424 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.424 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.424 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.424 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.424 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.424 Compiler for C supports arguments -Wundef: YES 00:02:13.424 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.424 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.424 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.424 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:13.424 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.424 Program objdump found: YES (/usr/bin/objdump) 00:02:13.424 Compiler for C supports arguments -mavx512f: YES 00:02:13.424 Checking if "AVX512 checking" compiles: YES 00:02:13.424 Fetching value of define "__SSE4_2__" : 1 00:02:13.424 Fetching value of define "__AES__" : 1 00:02:13.424 Fetching value of define "__AVX__" : 1 00:02:13.424 Fetching value of define "__AVX2__" : 1 00:02:13.424 Fetching value of define "__AVX512BW__" : 1 00:02:13.424 Fetching value of define "__AVX512CD__" : 1 00:02:13.424 Fetching value of define "__AVX512DQ__" : 1 00:02:13.424 Fetching value of define "__AVX512F__" : 1 00:02:13.424 Fetching value of define "__AVX512VL__" : 1 00:02:13.424 Fetching value of define "__PCLMUL__" : 1 00:02:13.424 Fetching value of define "__RDRND__" : 1 00:02:13.424 Fetching value of define "__RDSEED__" : 1 00:02:13.424 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.424 Fetching value of define "__znver1__" : (undefined) 00:02:13.424 Fetching value of define "__znver2__" : (undefined) 00:02:13.424 Fetching value of define "__znver3__" : (undefined) 00:02:13.424 Fetching value of define "__znver4__" : (undefined) 00:02:13.424 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.424 Message: lib/log: Defining dependency "log" 00:02:13.424 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.424 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.424 Checking for function "getentropy" : NO 00:02:13.424 Message: lib/eal: Defining dependency "eal" 00:02:13.424 Message: lib/ring: Defining dependency "ring" 00:02:13.424 Message: lib/rcu: Defining dependency "rcu" 00:02:13.424 Message: lib/mempool: Defining dependency "mempool" 00:02:13.424 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.424 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.424 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.424 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:13.424 Compiler for C supports arguments -mpclmul: YES 00:02:13.424 Compiler for C supports arguments -maes: YES 00:02:13.424 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.424 Compiler for C supports arguments -mavx512bw: YES 00:02:13.424 Compiler for C supports arguments -mavx512dq: YES 00:02:13.424 Compiler for C supports arguments -mavx512vl: YES 00:02:13.424 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.424 Compiler for C supports arguments -mavx2: YES 00:02:13.424 Compiler for C supports arguments -mavx: YES 00:02:13.424 Message: lib/net: Defining dependency "net" 00:02:13.424 Message: lib/meter: Defining dependency "meter" 00:02:13.424 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.424 Message: lib/pci: Defining dependency "pci" 00:02:13.424 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.424 Message: lib/metrics: Defining dependency "metrics" 00:02:13.424 Message: lib/hash: Defining dependency "hash" 00:02:13.424 Message: lib/timer: Defining dependency "timer" 00:02:13.424 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.424 Message: lib/acl: Defining dependency "acl" 00:02:13.424 Message: lib/bbdev: Defining dependency "bbdev" 00:02:13.424 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:13.424 Run-time dependency libelf found: YES 0.191 00:02:13.424 Message: lib/bpf: Defining dependency "bpf" 00:02:13.424 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:13.424 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.424 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.424 Message: lib/distributor: Defining dependency "distributor" 00:02:13.424 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.424 Message: lib/efd: Defining dependency "efd" 00:02:13.424 Message: lib/eventdev: Defining dependency "eventdev" 00:02:13.424 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:13.424 Message: lib/gpudev: Defining dependency "gpudev" 00:02:13.424 Message: lib/gro: Defining dependency "gro" 00:02:13.424 Message: lib/gso: Defining dependency "gso" 00:02:13.424 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:13.424 Message: lib/jobstats: Defining dependency "jobstats" 00:02:13.424 Message: lib/latencystats: Defining dependency "latencystats" 00:02:13.424 Message: lib/lpm: Defining dependency "lpm" 00:02:13.424 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.424 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:13.424 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:13.424 Message: lib/member: Defining dependency "member" 00:02:13.424 Message: lib/pcapng: Defining dependency "pcapng" 00:02:13.424 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.424 Message: lib/power: Defining dependency "power" 00:02:13.425 Message: lib/rawdev: Defining dependency "rawdev" 00:02:13.425 Message: lib/regexdev: Defining dependency "regexdev" 00:02:13.425 Message: lib/mldev: Defining dependency "mldev" 00:02:13.425 Message: lib/rib: Defining dependency "rib" 00:02:13.425 Message: lib/reorder: Defining dependency "reorder" 00:02:13.425 Message: lib/sched: Defining dependency "sched" 00:02:13.425 Message: lib/security: Defining dependency "security" 00:02:13.425 Message: lib/stack: Defining dependency "stack" 00:02:13.425 Has header 
"linux/userfaultfd.h" : YES 00:02:13.425 Has header "linux/vduse.h" : YES 00:02:13.425 Message: lib/vhost: Defining dependency "vhost" 00:02:13.425 Message: lib/ipsec: Defining dependency "ipsec" 00:02:13.425 Message: lib/pdcp: Defining dependency "pdcp" 00:02:13.425 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.425 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.425 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.425 Message: lib/fib: Defining dependency "fib" 00:02:13.425 Message: lib/port: Defining dependency "port" 00:02:13.425 Message: lib/pdump: Defining dependency "pdump" 00:02:13.425 Message: lib/table: Defining dependency "table" 00:02:13.425 Message: lib/pipeline: Defining dependency "pipeline" 00:02:13.425 Message: lib/graph: Defining dependency "graph" 00:02:13.425 Message: lib/node: Defining dependency "node" 00:02:13.425 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.425 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.425 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.994 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.994 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:13.994 Compiler for C supports arguments -Wno-unused-value: YES 00:02:13.994 Compiler for C supports arguments -Wno-format: YES 00:02:13.994 Compiler for C supports arguments -Wno-format-security: YES 00:02:13.994 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:13.994 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:13.994 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:13.994 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:13.994 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.994 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.994 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.994 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:13.994 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:13.994 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:13.994 Has header "sys/epoll.h" : YES 00:02:13.994 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:13.994 Configuring doxy-api-html.conf using configuration 00:02:13.994 Configuring doxy-api-man.conf using configuration 00:02:13.994 Program mandb found: YES (/usr/bin/mandb) 00:02:13.994 Program sphinx-build found: NO 00:02:13.994 Configuring rte_build_config.h using configuration 00:02:13.994 Message: 00:02:13.994 ================= 00:02:13.994 Applications Enabled 00:02:13.994 ================= 00:02:13.994 00:02:13.994 apps: 00:02:13.994 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:13.994 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:13.994 test-pmd, test-regex, test-sad, test-security-perf, 00:02:13.994 00:02:13.994 Message: 00:02:13.994 ================= 00:02:13.994 Libraries Enabled 00:02:13.994 ================= 00:02:13.994 00:02:13.994 libs: 00:02:13.994 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:13.994 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:13.994 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:13.994 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:13.994 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:13.994 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:13.994 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:13.994 00:02:13.994 00:02:13.994 Message: 00:02:13.994 =============== 00:02:13.994 Drivers Enabled 00:02:13.994 =============== 00:02:13.994 00:02:13.994 common: 00:02:13.994 00:02:13.994 bus: 00:02:13.994 pci, vdev, 00:02:13.994 mempool: 00:02:13.994 ring, 00:02:13.994 dma: 
00:02:13.994 00:02:13.994 net: 00:02:13.994 i40e, 00:02:13.994 raw: 00:02:13.994 00:02:13.994 crypto: 00:02:13.994 00:02:13.994 compress: 00:02:13.994 00:02:13.994 regex: 00:02:13.994 00:02:13.994 ml: 00:02:13.994 00:02:13.994 vdpa: 00:02:13.994 00:02:13.994 event: 00:02:13.994 00:02:13.994 baseband: 00:02:13.994 00:02:13.994 gpu: 00:02:13.994 00:02:13.994 00:02:13.994 Message: 00:02:13.994 ================= 00:02:13.994 Content Skipped 00:02:13.994 ================= 00:02:13.994 00:02:13.994 apps: 00:02:13.994 00:02:13.994 libs: 00:02:13.994 00:02:13.994 drivers: 00:02:13.994 common/cpt: not in enabled drivers build config 00:02:13.994 common/dpaax: not in enabled drivers build config 00:02:13.994 common/iavf: not in enabled drivers build config 00:02:13.994 common/idpf: not in enabled drivers build config 00:02:13.994 common/mvep: not in enabled drivers build config 00:02:13.994 common/octeontx: not in enabled drivers build config 00:02:13.994 bus/auxiliary: not in enabled drivers build config 00:02:13.994 bus/cdx: not in enabled drivers build config 00:02:13.994 bus/dpaa: not in enabled drivers build config 00:02:13.994 bus/fslmc: not in enabled drivers build config 00:02:13.994 bus/ifpga: not in enabled drivers build config 00:02:13.994 bus/platform: not in enabled drivers build config 00:02:13.994 bus/vmbus: not in enabled drivers build config 00:02:13.994 common/cnxk: not in enabled drivers build config 00:02:13.994 common/mlx5: not in enabled drivers build config 00:02:13.994 common/nfp: not in enabled drivers build config 00:02:13.994 common/qat: not in enabled drivers build config 00:02:13.994 common/sfc_efx: not in enabled drivers build config 00:02:13.994 mempool/bucket: not in enabled drivers build config 00:02:13.994 mempool/cnxk: not in enabled drivers build config 00:02:13.994 mempool/dpaa: not in enabled drivers build config 00:02:13.994 mempool/dpaa2: not in enabled drivers build config 00:02:13.994 mempool/octeontx: not in enabled drivers build 
config
00:02:13.994 mempool/stack: not in enabled drivers build config
00:02:13.994 dma/cnxk: not in enabled drivers build config
00:02:13.994 dma/dpaa: not in enabled drivers build config
00:02:13.994 dma/dpaa2: not in enabled drivers build config
00:02:13.994 dma/hisilicon: not in enabled drivers build config
00:02:13.994 dma/idxd: not in enabled drivers build config
00:02:13.994 dma/ioat: not in enabled drivers build config
00:02:13.994 dma/skeleton: not in enabled drivers build config
00:02:13.994 net/af_packet: not in enabled drivers build config
00:02:13.994 net/af_xdp: not in enabled drivers build config
00:02:13.994 net/ark: not in enabled drivers build config
00:02:13.994 net/atlantic: not in enabled drivers build config
00:02:13.994 net/avp: not in enabled drivers build config
00:02:13.994 net/axgbe: not in enabled drivers build config
00:02:13.994 net/bnx2x: not in enabled drivers build config
00:02:13.994 net/bnxt: not in enabled drivers build config
00:02:13.994 net/bonding: not in enabled drivers build config
00:02:13.994 net/cnxk: not in enabled drivers build config
00:02:13.994 net/cpfl: not in enabled drivers build config
00:02:13.994 net/cxgbe: not in enabled drivers build config
00:02:13.994 net/dpaa: not in enabled drivers build config
00:02:13.994 net/dpaa2: not in enabled drivers build config
00:02:13.994 net/e1000: not in enabled drivers build config
00:02:13.994 net/ena: not in enabled drivers build config
00:02:13.994 net/enetc: not in enabled drivers build config
00:02:13.994 net/enetfec: not in enabled drivers build config
00:02:13.994 net/enic: not in enabled drivers build config
00:02:13.994 net/failsafe: not in enabled drivers build config
00:02:13.994 net/fm10k: not in enabled drivers build config
00:02:13.994 net/gve: not in enabled drivers build config
00:02:13.994 net/hinic: not in enabled drivers build config
00:02:13.994 net/hns3: not in enabled drivers build config
00:02:13.995 net/iavf: not in enabled drivers build config
00:02:13.995 net/ice: not in enabled drivers build config
00:02:13.995 net/idpf: not in enabled drivers build config
00:02:13.995 net/igc: not in enabled drivers build config
00:02:13.995 net/ionic: not in enabled drivers build config
00:02:13.995 net/ipn3ke: not in enabled drivers build config
00:02:13.995 net/ixgbe: not in enabled drivers build config
00:02:13.995 net/mana: not in enabled drivers build config
00:02:13.995 net/memif: not in enabled drivers build config
00:02:13.995 net/mlx4: not in enabled drivers build config
00:02:13.995 net/mlx5: not in enabled drivers build config
00:02:13.995 net/mvneta: not in enabled drivers build config
00:02:13.995 net/mvpp2: not in enabled drivers build config
00:02:13.995 net/netvsc: not in enabled drivers build config
00:02:13.995 net/nfb: not in enabled drivers build config
00:02:13.995 net/nfp: not in enabled drivers build config
00:02:13.995 net/ngbe: not in enabled drivers build config
00:02:13.995 net/null: not in enabled drivers build config
00:02:13.995 net/octeontx: not in enabled drivers build config
00:02:13.995 net/octeon_ep: not in enabled drivers build config
00:02:13.995 net/pcap: not in enabled drivers build config
00:02:13.995 net/pfe: not in enabled drivers build config
00:02:13.995 net/qede: not in enabled drivers build config
00:02:13.995 net/ring: not in enabled drivers build config
00:02:13.995 net/sfc: not in enabled drivers build config
00:02:13.995 net/softnic: not in enabled drivers build config
00:02:13.995 net/tap: not in enabled drivers build config
00:02:13.995 net/thunderx: not in enabled drivers build config
00:02:13.995 net/txgbe: not in enabled drivers build config
00:02:13.995 net/vdev_netvsc: not in enabled drivers build config
00:02:13.995 net/vhost: not in enabled drivers build config
00:02:13.995 net/virtio: not in enabled drivers build config
00:02:13.995 net/vmxnet3: not in enabled drivers build config
00:02:13.995 raw/cnxk_bphy: not in enabled drivers build config
00:02:13.995 raw/cnxk_gpio: not in enabled drivers build config
00:02:13.995 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:13.995 raw/ifpga: not in enabled drivers build config
00:02:13.995 raw/ntb: not in enabled drivers build config
00:02:13.995 raw/skeleton: not in enabled drivers build config
00:02:13.995 crypto/armv8: not in enabled drivers build config
00:02:13.995 crypto/bcmfs: not in enabled drivers build config
00:02:13.995 crypto/caam_jr: not in enabled drivers build config
00:02:13.995 crypto/ccp: not in enabled drivers build config
00:02:13.995 crypto/cnxk: not in enabled drivers build config
00:02:13.995 crypto/dpaa_sec: not in enabled drivers build config
00:02:13.995 crypto/dpaa2_sec: not in enabled drivers build config
00:02:13.995 crypto/ipsec_mb: not in enabled drivers build config
00:02:13.995 crypto/mlx5: not in enabled drivers build config
00:02:13.995 crypto/mvsam: not in enabled drivers build config
00:02:13.995 crypto/nitrox: not in enabled drivers build config
00:02:13.995 crypto/null: not in enabled drivers build config
00:02:13.995 crypto/octeontx: not in enabled drivers build config
00:02:13.995 crypto/openssl: not in enabled drivers build config
00:02:13.995 crypto/scheduler: not in enabled drivers build config
00:02:13.995 crypto/uadk: not in enabled drivers build config
00:02:13.995 crypto/virtio: not in enabled drivers build config
00:02:13.995 compress/isal: not in enabled drivers build config
00:02:13.995 compress/mlx5: not in enabled drivers build config
00:02:13.995 compress/octeontx: not in enabled drivers build config
00:02:13.995 compress/zlib: not in enabled drivers build config
00:02:13.995 regex/mlx5: not in enabled drivers build config
00:02:13.995 regex/cn9k: not in enabled drivers build config
00:02:13.995 ml/cnxk: not in enabled drivers build config
00:02:13.995 vdpa/ifc: not in enabled drivers build config
00:02:13.995 vdpa/mlx5: not in enabled drivers build config
00:02:13.995 vdpa/nfp: not in enabled drivers build config
00:02:13.995 vdpa/sfc: not in enabled drivers build config
00:02:13.995 event/cnxk: not in enabled drivers build config
00:02:13.995 event/dlb2: not in enabled drivers build config
00:02:13.995 event/dpaa: not in enabled drivers build config
00:02:13.995 event/dpaa2: not in enabled drivers build config
00:02:13.995 event/dsw: not in enabled drivers build config
00:02:13.995 event/opdl: not in enabled drivers build config
00:02:13.995 event/skeleton: not in enabled drivers build config
00:02:13.995 event/sw: not in enabled drivers build config
00:02:13.995 event/octeontx: not in enabled drivers build config
00:02:13.995 baseband/acc: not in enabled drivers build config
00:02:13.995 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:13.995 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:13.995 baseband/la12xx: not in enabled drivers build config
00:02:13.995 baseband/null: not in enabled drivers build config
00:02:13.995 baseband/turbo_sw: not in enabled drivers build config
00:02:13.995 gpu/cuda: not in enabled drivers build config
00:02:13.995 
00:02:13.995 
00:02:13.995 Build targets in project: 217
00:02:13.995 
00:02:13.995 DPDK 23.11.0
00:02:13.995 
00:02:13.995 User defined options
00:02:13.995 libdir : lib
00:02:13.995 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:13.995 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:13.995 c_link_args :
00:02:13.995 enable_docs : false
00:02:13.995 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:13.995 enable_kmods : false
00:02:13.995 machine : native
00:02:13.995 tests : false
00:02:13.995 
00:02:13.995 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:13.995 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:13.995 16:15:05 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:14.254 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:14.254 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.254 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.254 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.254 [4/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.254 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.254 [6/707] Linking static target lib/librte_kvargs.a 00:02:14.254 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:14.254 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.254 [9/707] Linking static target lib/librte_log.a 00:02:14.511 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.511 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.511 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.511 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.511 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.769 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.769 [16/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.769 [17/707] Linking target lib/librte_log.so.24.0 00:02:14.770 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.770 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.770 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:15.029 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.029 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.029 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.029 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.029 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.029 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:15.029 [27/707] Linking target lib/librte_kvargs.so.24.0 00:02:15.029 [28/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.029 [29/707] Linking static target lib/librte_telemetry.a 00:02:15.029 [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.288 [31/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:15.288 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.288 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.288 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.288 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.288 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.288 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.288 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.548 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.548 [40/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.548 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.548 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
00:02:15.548 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:15.548 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.548 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:15.548 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.548 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.807 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.807 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.807 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.807 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.807 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.807 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.807 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.067 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.067 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.067 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.067 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.067 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.067 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.067 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.067 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.067 [63/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.067 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.326 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.326 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.326 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.326 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.327 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.586 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.586 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.586 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.586 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.586 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.586 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.586 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.586 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.586 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.845 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.845 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.845 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.845 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.845 [83/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.845 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.845 [85/707] Linking static target lib/librte_ring.a 00:02:17.106 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.106 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.106 [88/707] Linking static target lib/librte_eal.a 00:02:17.106 [89/707] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.106 [90/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.106 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.106 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.106 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.369 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.369 [95/707] Linking static target lib/librte_mempool.a 00:02:17.369 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.369 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.369 [98/707] Linking static target lib/librte_rcu.a 00:02:17.636 [99/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.636 [100/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.636 [101/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.636 [102/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.636 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.636 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.636 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.636 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.896 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.896 [108/707] Linking static target lib/librte_net.a 00:02:17.896 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.896 [110/707] Linking static target lib/librte_mbuf.a 00:02:17.896 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.896 [112/707] Linking static target lib/librte_meter.a 00:02:17.896 [113/707] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.156 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.156 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.156 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.156 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.156 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.415 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.415 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.415 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.676 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.936 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.936 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.936 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.936 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.936 [127/707] Linking static target lib/librte_pci.a 00:02:18.936 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.936 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.936 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.936 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.936 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.936 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.196 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.196 
[135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.196 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.196 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.196 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.196 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.196 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.196 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.196 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.196 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.196 [144/707] Linking static target lib/librte_cmdline.a 00:02:19.457 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.457 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:19.457 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.457 [148/707] Linking static target lib/librte_metrics.a 00:02:19.717 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.717 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:19.977 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.977 [152/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.977 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.977 [154/707] Linking static target lib/librte_timer.a 00:02:19.977 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.237 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.237 [157/707] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:20.497 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.497 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.497 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.757 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:20.757 [162/707] Linking static target lib/librte_bitratestats.a 00:02:21.016 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:21.016 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.016 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:21.016 [166/707] Linking static target lib/librte_bbdev.a 00:02:21.361 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:21.361 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:21.361 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.361 [170/707] Linking static target lib/librte_hash.a 00:02:21.621 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:21.621 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:21.621 [173/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.621 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.621 [175/707] Linking static target lib/librte_ethdev.a 00:02:21.881 [176/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:21.881 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:21.881 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:02:21.881 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:21.881 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.881 [181/707] Generating lib/hash.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:21.881 [182/707] Linking target lib/librte_eal.so.24.0 00:02:21.881 [183/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:22.141 [184/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:22.141 [185/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:22.141 [186/707] Linking target lib/librte_ring.so.24.0 00:02:22.141 [187/707] Linking target lib/librte_meter.so.24.0 00:02:22.142 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:22.142 [189/707] Linking target lib/librte_pci.so.24.0 00:02:22.142 [190/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:22.142 [191/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:22.401 [192/707] Linking target lib/librte_rcu.so.24.0 00:02:22.401 [193/707] Linking target lib/librte_mempool.so.24.0 00:02:22.401 [194/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:22.401 [195/707] Linking target lib/librte_timer.so.24.0 00:02:22.401 [196/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:22.401 [197/707] Linking static target lib/librte_cfgfile.a 00:02:22.401 [198/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.401 [199/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:22.401 [200/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:22.401 [201/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:22.402 [202/707] Linking target lib/librte_mbuf.so.24.0 00:02:22.402 [203/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.402 [204/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:22.661 [205/707] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:22.661 [206/707] Linking target lib/librte_net.so.24.0 00:02:22.661 [207/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:22.661 [208/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.661 [209/707] Linking target lib/librte_bbdev.so.24.0 00:02:22.661 [210/707] Linking static target lib/librte_bpf.a 00:02:22.661 [211/707] Linking target lib/librte_cfgfile.so.24.0 00:02:22.661 [212/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:22.661 [213/707] Linking target lib/librte_cmdline.so.24.0 00:02:22.661 [214/707] Linking target lib/librte_hash.so.24.0 00:02:22.661 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.921 [216/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:22.921 [217/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.921 [218/707] Linking static target lib/librte_compressdev.a 00:02:22.921 [219/707] Linking static target lib/librte_acl.a 00:02:22.921 [220/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.921 [221/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:22.921 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.180 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:23.180 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:23.180 [225/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.180 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:23.180 [227/707] Linking static target lib/librte_distributor.a 00:02:23.180 [228/707] Linking target 
lib/librte_acl.so.24.0 00:02:23.180 [229/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:23.180 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:23.180 [231/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.440 [232/707] Linking target lib/librte_compressdev.so.24.0 00:02:23.440 [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.440 [234/707] Linking target lib/librte_distributor.so.24.0 00:02:23.440 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:23.440 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:23.440 [237/707] Linking static target lib/librte_dmadev.a 00:02:23.699 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:23.699 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.699 [240/707] Linking target lib/librte_dmadev.so.24.0 00:02:23.956 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:23.956 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:23.956 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:23.956 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:23.956 [245/707] Linking static target lib/librte_efd.a 00:02:24.215 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:24.215 [247/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.215 [248/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.215 [249/707] Linking static target lib/librte_cryptodev.a 00:02:24.215 [250/707] Linking target lib/librte_efd.so.24.0 
00:02:24.475 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:24.475 [252/707] Linking static target lib/librte_dispatcher.a 00:02:24.475 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:24.475 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:24.475 [255/707] Linking static target lib/librte_gpudev.a 00:02:24.735 [256/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:24.735 [257/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.735 [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:24.735 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:24.995 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:25.268 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.268 [262/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.268 [263/707] Linking target lib/librte_cryptodev.so.24.0 00:02:25.268 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:25.268 [265/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:25.268 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.268 [267/707] Linking target lib/librte_gpudev.so.24.0 00:02:25.268 [268/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.268 [269/707] Linking static target lib/librte_gro.a 00:02:25.268 [270/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:25.545 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:25.545 [272/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:25.545 [273/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:25.545 [274/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.545 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:25.545 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:25.545 [277/707] Linking target lib/librte_ethdev.so.24.0 00:02:25.545 [278/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.545 [279/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:25.545 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:25.545 [281/707] Linking static target lib/librte_gso.a 00:02:25.545 [282/707] Linking static target lib/librte_eventdev.a 00:02:25.805 [283/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:25.805 [284/707] Linking target lib/librte_metrics.so.24.0 00:02:25.805 [285/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.805 [286/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.805 [287/707] Linking target lib/librte_bpf.so.24.0 00:02:25.805 [288/707] Linking target lib/librte_gro.so.24.0 00:02:25.805 [289/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:25.805 [290/707] Linking target lib/librte_gso.so.24.0 00:02:25.805 [291/707] Linking target lib/librte_bitratestats.so.24.0 00:02:25.805 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:25.805 [293/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:25.805 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:25.805 [295/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:25.805 [296/707] Linking static target lib/librte_jobstats.a 00:02:26.065 [297/707] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:26.065 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:26.065 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:26.065 [300/707] Linking static target lib/librte_ip_frag.a 00:02:26.065 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.065 [302/707] Linking target lib/librte_jobstats.so.24.0 00:02:26.324 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:26.324 [304/707] Linking static target lib/librte_latencystats.a 00:02:26.324 [305/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.324 [306/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:26.324 [307/707] Linking target lib/librte_ip_frag.so.24.0 00:02:26.324 [308/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:26.324 [309/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:26.324 [310/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:26.583 [311/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:26.583 [312/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.583 [313/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.583 [314/707] Linking target lib/librte_latencystats.so.24.0 00:02:26.583 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.583 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.843 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:26.843 [318/707] Linking static target lib/librte_lpm.a 00:02:26.843 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:26.843 
[320/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:26.843 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:27.104 [322/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:27.104 [323/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:27.104 [324/707] Linking static target lib/librte_pcapng.a 00:02:27.104 [325/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.104 [326/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.104 [327/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.104 [328/707] Linking target lib/librte_lpm.so.24.0 00:02:27.363 [329/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.363 [330/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:27.363 [331/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.364 [332/707] Linking target lib/librte_pcapng.so.24.0 00:02:27.364 [333/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:27.364 [334/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:27.364 [335/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.364 [336/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.364 [337/707] Linking target lib/librte_eventdev.so.24.0 00:02:27.623 [338/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.623 [339/707] Linking static target lib/librte_power.a 00:02:27.623 [340/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:27.623 [341/707] Linking target lib/librte_dispatcher.so.24.0 00:02:27.623 [342/707] Compiling C object 
lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:27.623 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:27.623 [344/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:27.623 [345/707] Linking static target lib/librte_regexdev.a 00:02:27.623 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:27.623 [347/707] Linking static target lib/librte_rawdev.a 00:02:27.883 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:27.883 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:27.883 [350/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:27.883 [351/707] Linking static target lib/librte_member.a 00:02:27.883 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:27.883 [353/707] Linking static target lib/librte_mldev.a 00:02:27.883 [354/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.143 [355/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.143 [356/707] Linking target lib/librte_rawdev.so.24.0 00:02:28.143 [357/707] Linking target lib/librte_power.so.24.0 00:02:28.143 [358/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.143 [359/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:28.143 [360/707] Linking target lib/librte_member.so.24.0 00:02:28.143 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:28.143 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:28.143 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.143 [364/707] Linking static target lib/librte_reorder.a 00:02:28.143 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.143 [366/707] 
Linking target lib/librte_regexdev.so.24.0 00:02:28.403 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:28.403 [368/707] Linking static target lib/librte_rib.a 00:02:28.403 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:28.403 [370/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:28.403 [371/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.403 [372/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.403 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:28.403 [374/707] Linking target lib/librte_reorder.so.24.0 00:02:28.403 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:28.403 [376/707] Linking static target lib/librte_stack.a 00:02:28.663 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:28.663 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.663 [379/707] Linking static target lib/librte_security.a 00:02:28.663 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.663 [381/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.663 [382/707] Linking target lib/librte_stack.so.24.0 00:02:28.663 [383/707] Linking target lib/librte_rib.so.24.0 00:02:28.923 [384/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:28.923 [385/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:28.923 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:28.923 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.923 [388/707] Linking target lib/librte_mldev.so.24.0 00:02:28.923 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:28.923 [390/707] Linking target lib/librte_security.so.24.0 00:02:29.182 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.182 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:29.182 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:29.182 [394/707] Linking static target lib/librte_sched.a 00:02:29.182 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:29.442 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:29.442 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.442 [398/707] Linking target lib/librte_sched.so.24.0 00:02:29.442 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.701 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:29.701 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:29.701 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:29.701 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:29.960 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:29.960 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:29.960 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:29.960 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:30.219 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:30.219 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:30.219 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:30.478 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:30.478 [412/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:30.478 [413/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:30.478 
[414/707] Linking static target lib/librte_ipsec.a 00:02:30.478 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:30.738 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.738 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:30.738 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:30.738 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:30.738 [420/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:30.998 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:30.998 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:30.998 [423/707] Linking static target lib/librte_fib.a 00:02:30.998 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:30.998 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:31.257 [426/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.257 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:31.257 [428/707] Linking target lib/librte_fib.so.24.0 00:02:31.257 [429/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:31.517 [430/707] Linking static target lib/librte_pdcp.a 00:02:31.517 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:31.517 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:31.517 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.517 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:31.777 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:31.777 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:32.038 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:32.038 [438/707] 
Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:32.038 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:32.038 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:32.298 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:32.298 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:32.558 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:32.558 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:32.558 [445/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:32.558 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:32.558 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:32.558 [448/707] Linking static target lib/librte_port.a 00:02:32.558 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:32.558 [450/707] Linking static target lib/librte_pdump.a 00:02:32.558 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:32.818 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:32.818 [453/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.818 [454/707] Linking target lib/librte_pdump.so.24.0 00:02:32.818 [455/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:33.078 [456/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.078 [457/707] Linking target lib/librte_port.so.24.0 00:02:33.078 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:33.337 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:33.337 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 
00:02:33.337 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:33.337 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:33.337 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:33.596 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:33.596 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:33.596 [466/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.596 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:33.596 [468/707] Linking static target lib/librte_table.a 00:02:33.855 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:33.855 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:34.115 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:34.115 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.115 [473/707] Linking target lib/librte_table.so.24.0 00:02:34.375 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:34.375 [475/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:34.375 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:34.375 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:34.375 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:34.634 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:34.635 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:34.635 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:34.635 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:34.895 [483/707] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:35.154 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:35.154 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:35.154 [486/707] Linking static target lib/librte_graph.a 00:02:35.154 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:35.154 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:35.414 [489/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:35.414 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:35.673 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:35.673 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.673 [493/707] Linking target lib/librte_graph.so.24.0 00:02:35.673 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:35.673 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:35.673 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:35.933 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:35.933 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:35.933 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:35.933 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:36.194 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:36.194 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.194 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:36.194 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:36.454 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:36.454 [506/707] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.454 [507/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:36.454 [508/707] Linking static target lib/librte_node.a 00:02:36.454 [509/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:36.454 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.454 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.714 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.714 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.714 [514/707] Linking target lib/librte_node.so.24.0 00:02:36.714 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.714 [516/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.714 [517/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.714 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.974 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.974 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.974 [521/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.974 [522/707] Linking static target drivers/librte_bus_vdev.a 00:02:36.974 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.974 [524/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.974 [525/707] Linking static target drivers/librte_bus_pci.a 00:02:36.974 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:37.234 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.234 [528/707] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.234 [529/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:37.234 [530/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:37.234 [531/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:37.234 [532/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:37.234 [533/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.234 [534/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.234 [535/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:37.234 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:37.494 [537/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:37.494 [538/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:37.494 [539/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:37.494 [540/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.494 [541/707] Linking static target drivers/librte_mempool_ring.a 00:02:37.494 [542/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.494 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:37.754 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:38.014 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:38.274 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:38.274 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:38.534 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 
00:02:38.794 [549/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:38.794 [550/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:38.794 [551/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:38.794 [552/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:39.106 [553/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:39.106 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:39.106 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:39.366 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:39.366 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:39.366 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:39.366 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:39.626 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:39.886 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:39.886 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:39.886 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:40.146 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:40.146 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:40.146 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:40.405 [567/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:40.406 [568/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:40.406 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:40.406 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 
00:02:40.666 [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:40.666 [572/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:40.666 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:40.666 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:40.666 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:40.926 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:40.926 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:40.926 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:40.926 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:40.926 [580/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:41.186 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:41.186 [582/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:41.186 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:41.186 [584/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:41.186 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.186 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.445 [587/707] Linking static target drivers/librte_net_i40e.a 00:02:41.445 [588/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:41.445 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:41.445 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:41.705 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.964 [592/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:41.964 [593/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:41.964 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:41.964 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:42.223 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:42.223 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:42.223 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:42.223 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:42.483 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:42.483 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:42.743 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:42.744 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:42.744 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:42.744 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:42.744 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:43.003 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:43.003 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:43.003 [609/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:43.003 [610/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.003 [611/707] Linking static target lib/librte_vhost.a 00:02:43.003 [612/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:43.263 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:43.263 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:43.263 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:43.522 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:43.522 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:43.522 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:43.781 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.041 [620/707] Linking target lib/librte_vhost.so.24.0 00:02:44.041 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:44.301 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:44.301 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:44.302 [624/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:44.302 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:44.562 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:44.562 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:44.562 [628/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:44.562 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:44.562 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:44.822 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:44.822 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:44.822 [633/707] Compiling C 
object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:44.822 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:44.822 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:45.082 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:45.082 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:45.082 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:45.082 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:45.342 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:45.342 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:45.342 [642/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:45.342 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:45.602 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:45.602 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:45.602 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:45.602 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:45.602 [648/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:45.862 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:45.862 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:45.862 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:46.123 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:46.123 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:46.123 [654/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:46.383 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:46.383 [656/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:46.383 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:46.383 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:46.383 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:46.647 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:46.913 [661/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:46.913 [662/707] Linking static target lib/librte_pipeline.a 00:02:46.913 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:46.913 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:46.913 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:46.913 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:47.189 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:47.189 [668/707] Linking target app/dpdk-dumpcap 00:02:47.450 [669/707] Linking target app/dpdk-graph 00:02:47.450 [670/707] Linking target app/dpdk-pdump 00:02:47.450 [671/707] Linking target app/dpdk-proc-info 00:02:47.450 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:47.450 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:47.709 [674/707] Linking target app/dpdk-test-acl 00:02:47.709 [675/707] Linking target app/dpdk-test-bbdev 00:02:47.709 [676/707] Linking target app/dpdk-test-cmdline 00:02:47.709 [677/707] Linking target app/dpdk-test-compress-perf 00:02:47.709 [678/707] Linking target app/dpdk-test-crypto-perf 00:02:47.968 [679/707] Linking target app/dpdk-test-dma-perf 00:02:47.968 [680/707] Linking target app/dpdk-test-eventdev 00:02:47.968 [681/707] 
Linking target app/dpdk-test-fib
00:02:47.968 [682/707] Linking target app/dpdk-test-flow-perf
00:02:48.227 [683/707] Linking target app/dpdk-test-pipeline
00:02:48.227 [684/707] Linking target app/dpdk-test-mldev
00:02:48.227 [685/707] Linking target app/dpdk-test-gpudev
00:02:48.487 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:48.487 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:48.487 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:48.487 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:48.748 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:48.748 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:49.006 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:49.006 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:49.006 [694/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.266 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:49.266 [696/707] Linking target lib/librte_pipeline.so.24.0
00:02:49.266 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:49.266 [698/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:49.266 [699/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:49.266 [700/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:49.525 [701/707] Linking target app/dpdk-test-sad
00:02:49.525 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:49.784 [703/707] Linking target app/dpdk-test-regex
00:02:49.784 [704/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:49.784 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:50.045 [706/707] Linking target app/dpdk-testpmd
00:02:50.045 [707/707] Linking target app/dpdk-test-security-perf
00:02:50.305 16:15:41 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:50.305 16:15:41 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:50.305 16:15:41 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:50.305 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:50.305 [0/1] Installing files.
00:02:50.568 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.568 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:50.569 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.570 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.571 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.572 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.573 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.573 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.573 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.573 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.574 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.574 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_bbdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.574 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_gpudev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:50.575 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pdump.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:50.575 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.149 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.149 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.149 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.149 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.149 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.149 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.149 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.150 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.150 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.150 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:51.150 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 
00:02:51.150 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.151 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:51.153 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:51.153 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:51.153 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:51.153 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:51.153 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:51.153 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:51.153 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:51.153 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:51.153 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:51.153 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:51.153 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:51.153 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:51.153 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:51.153 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:51.153 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:51.153 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:51.153 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:51.153 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:51.153 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:51.153 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:51.153 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:51.153 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:51.153 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:51.153 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:51.153 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:51.153 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:51.153 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:51.153 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:51.153 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:51.153 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:51.153 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:51.153 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:51.153 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:51.153 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:51.153 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:51.153 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:51.153 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:51.153 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:51.153 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:51.153 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:51.153 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:51.153 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:51.153 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:51.153 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:51.153 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:51.153 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:51.153 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:51.153 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:51.153 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:51.153 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:51.153 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:51.153 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:51.153 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:51.153 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:51.153 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:51.153 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:51.153 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:51.153 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:51.153 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:51.153 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:51.153 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:51.153 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:51.153 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:51.153 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:51.153 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:51.153 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:51.153 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:51.153 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:51.153 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:51.153 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:51.153 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:51.153 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:51.153 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:51.153 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:51.153 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:51.153 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:51.153 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:51.153 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:51.153 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:51.153 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:51.153 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:51.153 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:51.153 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:51.154 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:51.154 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:51.154 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:51.154 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:51.154 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:51.154 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:51.154 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:51.154 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:51.154 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:51.154 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:51.154 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:51.154 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:51.154 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:51.154 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:51.154 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:51.154 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:51.154 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:51.154 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:51.154 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:51.154 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:51.154 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:51.154 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:51.154 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:51.154 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:51.154 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:51.154 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:51.154 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:51.154 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:51.154 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:51.154 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:51.154 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:51.154 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:51.154 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:51.154 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:51.154 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:51.154 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:51.154 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:51.154 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:51.154 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:51.154 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:51.154 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:51.154 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:51.154 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:51.154 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:51.154 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:51.154 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:51.154 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:51.154 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:51.154 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:51.154 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:51.154 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:51.154 16:15:42 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:51.154 16:15:42 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:51.154 00:02:51.154 real 0m44.464s 00:02:51.154 user 4m56.899s 00:02:51.154 sys 0m54.640s 00:02:51.154 16:15:42 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:51.154 16:15:42 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:51.154 ************************************ 00:02:51.154 END TEST build_native_dpdk 00:02:51.154 ************************************ 00:02:51.154 16:15:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:51.154 16:15:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:51.154 16:15:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:51.154 16:15:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:51.154 16:15:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:51.154 16:15:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:51.154 16:15:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:51.154 16:15:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:51.414 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:51.414 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.414 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:51.414 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:51.986 Using 'verbs' RDMA provider 00:03:08.259 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:26.362 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:26.362 Creating mk/config.mk...done. 00:03:26.362 Creating mk/cc.flags.mk...done. 00:03:26.362 Type 'make' to build. 00:03:26.362 16:16:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:26.362 16:16:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:26.362 16:16:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:26.362 16:16:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.362 ************************************ 00:03:26.362 START TEST make 00:03:26.362 ************************************ 00:03:26.362 16:16:16 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:26.362 make[1]: Nothing to be done for 'all'. 
00:04:13.058 CC lib/log/log.o 00:04:13.058 CC lib/log/log_flags.o 00:04:13.058 CC lib/log/log_deprecated.o 00:04:13.058 CC lib/ut/ut.o 00:04:13.058 CC lib/ut_mock/mock.o 00:04:13.058 LIB libspdk_log.a 00:04:13.058 LIB libspdk_ut.a 00:04:13.058 LIB libspdk_ut_mock.a 00:04:13.058 SO libspdk_log.so.7.0 00:04:13.058 SO libspdk_ut.so.2.0 00:04:13.058 SO libspdk_ut_mock.so.6.0 00:04:13.058 SYMLINK libspdk_ut.so 00:04:13.058 SYMLINK libspdk_log.so 00:04:13.058 SYMLINK libspdk_ut_mock.so 00:04:13.058 CC lib/dma/dma.o 00:04:13.058 CC lib/util/base64.o 00:04:13.058 CC lib/util/cpuset.o 00:04:13.058 CC lib/util/bit_array.o 00:04:13.058 CC lib/util/crc16.o 00:04:13.058 CC lib/util/crc32.o 00:04:13.058 CC lib/util/crc32c.o 00:04:13.058 CXX lib/trace_parser/trace.o 00:04:13.058 CC lib/ioat/ioat.o 00:04:13.058 CC lib/vfio_user/host/vfio_user_pci.o 00:04:13.058 CC lib/util/crc32_ieee.o 00:04:13.058 CC lib/util/crc64.o 00:04:13.058 CC lib/util/dif.o 00:04:13.058 CC lib/vfio_user/host/vfio_user.o 00:04:13.058 LIB libspdk_dma.a 00:04:13.058 CC lib/util/fd.o 00:04:13.058 SO libspdk_dma.so.5.0 00:04:13.058 CC lib/util/fd_group.o 00:04:13.058 CC lib/util/file.o 00:04:13.058 CC lib/util/hexlify.o 00:04:13.058 SYMLINK libspdk_dma.so 00:04:13.059 CC lib/util/iov.o 00:04:13.059 LIB libspdk_ioat.a 00:04:13.059 SO libspdk_ioat.so.7.0 00:04:13.059 CC lib/util/math.o 00:04:13.059 CC lib/util/net.o 00:04:13.059 SYMLINK libspdk_ioat.so 00:04:13.059 CC lib/util/pipe.o 00:04:13.059 LIB libspdk_vfio_user.a 00:04:13.059 CC lib/util/strerror_tls.o 00:04:13.059 CC lib/util/string.o 00:04:13.059 SO libspdk_vfio_user.so.5.0 00:04:13.059 CC lib/util/uuid.o 00:04:13.059 CC lib/util/xor.o 00:04:13.059 SYMLINK libspdk_vfio_user.so 00:04:13.059 CC lib/util/zipf.o 00:04:13.059 CC lib/util/md5.o 00:04:13.059 LIB libspdk_util.a 00:04:13.059 SO libspdk_util.so.10.0 00:04:13.059 SYMLINK libspdk_util.so 00:04:13.059 LIB libspdk_trace_parser.a 00:04:13.059 SO libspdk_trace_parser.so.6.0 00:04:13.059 SYMLINK 
libspdk_trace_parser.so 00:04:13.059 CC lib/vmd/vmd.o 00:04:13.059 CC lib/vmd/led.o 00:04:13.059 CC lib/rdma_utils/rdma_utils.o 00:04:13.059 CC lib/json/json_parse.o 00:04:13.059 CC lib/rdma_provider/common.o 00:04:13.059 CC lib/env_dpdk/env.o 00:04:13.059 CC lib/json/json_util.o 00:04:13.059 CC lib/env_dpdk/memory.o 00:04:13.059 CC lib/idxd/idxd.o 00:04:13.059 CC lib/conf/conf.o 00:04:13.059 CC lib/idxd/idxd_user.o 00:04:13.059 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:13.059 CC lib/env_dpdk/pci.o 00:04:13.059 LIB libspdk_conf.a 00:04:13.059 SO libspdk_conf.so.6.0 00:04:13.059 LIB libspdk_rdma_utils.a 00:04:13.059 CC lib/json/json_write.o 00:04:13.059 SO libspdk_rdma_utils.so.1.0 00:04:13.059 SYMLINK libspdk_conf.so 00:04:13.059 CC lib/idxd/idxd_kernel.o 00:04:13.059 SYMLINK libspdk_rdma_utils.so 00:04:13.059 CC lib/env_dpdk/init.o 00:04:13.059 LIB libspdk_rdma_provider.a 00:04:13.059 CC lib/env_dpdk/threads.o 00:04:13.059 SO libspdk_rdma_provider.so.6.0 00:04:13.059 SYMLINK libspdk_rdma_provider.so 00:04:13.059 CC lib/env_dpdk/pci_ioat.o 00:04:13.059 CC lib/env_dpdk/pci_virtio.o 00:04:13.059 CC lib/env_dpdk/pci_vmd.o 00:04:13.059 LIB libspdk_json.a 00:04:13.059 CC lib/env_dpdk/pci_idxd.o 00:04:13.059 CC lib/env_dpdk/pci_event.o 00:04:13.059 SO libspdk_json.so.6.0 00:04:13.059 CC lib/env_dpdk/sigbus_handler.o 00:04:13.059 CC lib/env_dpdk/pci_dpdk.o 00:04:13.059 SYMLINK libspdk_json.so 00:04:13.059 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.059 LIB libspdk_idxd.a 00:04:13.059 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.059 LIB libspdk_vmd.a 00:04:13.059 SO libspdk_idxd.so.12.1 00:04:13.059 SO libspdk_vmd.so.6.0 00:04:13.059 SYMLINK libspdk_idxd.so 00:04:13.059 SYMLINK libspdk_vmd.so 00:04:13.059 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:13.059 CC lib/jsonrpc/jsonrpc_client.o 00:04:13.059 CC lib/jsonrpc/jsonrpc_server.o 00:04:13.059 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:13.059 LIB libspdk_jsonrpc.a 00:04:13.059 SO libspdk_jsonrpc.so.6.0 00:04:13.059 SYMLINK 
libspdk_jsonrpc.so 00:04:13.059 LIB libspdk_env_dpdk.a 00:04:13.059 SO libspdk_env_dpdk.so.15.0 00:04:13.059 CC lib/rpc/rpc.o 00:04:13.059 SYMLINK libspdk_env_dpdk.so 00:04:13.059 LIB libspdk_rpc.a 00:04:13.059 SO libspdk_rpc.so.6.0 00:04:13.059 SYMLINK libspdk_rpc.so 00:04:13.059 CC lib/trace/trace.o 00:04:13.059 CC lib/trace/trace_flags.o 00:04:13.059 CC lib/keyring/keyring.o 00:04:13.059 CC lib/trace/trace_rpc.o 00:04:13.059 CC lib/keyring/keyring_rpc.o 00:04:13.059 CC lib/notify/notify.o 00:04:13.059 CC lib/notify/notify_rpc.o 00:04:13.059 LIB libspdk_notify.a 00:04:13.059 SO libspdk_notify.so.6.0 00:04:13.059 LIB libspdk_keyring.a 00:04:13.059 LIB libspdk_trace.a 00:04:13.059 SO libspdk_keyring.so.2.0 00:04:13.059 SYMLINK libspdk_notify.so 00:04:13.059 SO libspdk_trace.so.11.0 00:04:13.059 SYMLINK libspdk_keyring.so 00:04:13.059 SYMLINK libspdk_trace.so 00:04:13.059 CC lib/thread/iobuf.o 00:04:13.059 CC lib/thread/thread.o 00:04:13.059 CC lib/sock/sock.o 00:04:13.059 CC lib/sock/sock_rpc.o 00:04:13.059 LIB libspdk_sock.a 00:04:13.059 SO libspdk_sock.so.10.0 00:04:13.059 SYMLINK libspdk_sock.so 00:04:13.059 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:13.059 CC lib/nvme/nvme_ctrlr.o 00:04:13.059 CC lib/nvme/nvme_fabric.o 00:04:13.059 CC lib/nvme/nvme_ns_cmd.o 00:04:13.059 CC lib/nvme/nvme_pcie_common.o 00:04:13.059 CC lib/nvme/nvme_ns.o 00:04:13.059 CC lib/nvme/nvme.o 00:04:13.059 CC lib/nvme/nvme_pcie.o 00:04:13.059 CC lib/nvme/nvme_qpair.o 00:04:13.627 CC lib/nvme/nvme_quirks.o 00:04:13.627 CC lib/nvme/nvme_transport.o 00:04:13.627 CC lib/nvme/nvme_discovery.o 00:04:13.627 LIB libspdk_thread.a 00:04:13.627 SO libspdk_thread.so.10.1 00:04:13.886 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:13.886 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:13.886 CC lib/nvme/nvme_tcp.o 00:04:13.886 CC lib/nvme/nvme_opal.o 00:04:13.886 SYMLINK libspdk_thread.so 00:04:13.886 CC lib/nvme/nvme_io_msg.o 00:04:14.145 CC lib/nvme/nvme_poll_group.o 00:04:14.145 CC lib/nvme/nvme_zns.o 00:04:14.145 CC 
lib/nvme/nvme_stubs.o 00:04:14.405 CC lib/nvme/nvme_auth.o 00:04:14.405 CC lib/nvme/nvme_cuse.o 00:04:14.405 CC lib/nvme/nvme_rdma.o 00:04:14.405 CC lib/accel/accel.o 00:04:14.665 CC lib/blob/blobstore.o 00:04:14.665 CC lib/blob/request.o 00:04:14.665 CC lib/blob/zeroes.o 00:04:14.665 CC lib/blob/blob_bs_dev.o 00:04:14.924 CC lib/init/json_config.o 00:04:14.924 CC lib/virtio/virtio.o 00:04:14.924 CC lib/fsdev/fsdev.o 00:04:15.183 CC lib/init/subsystem.o 00:04:15.183 CC lib/init/subsystem_rpc.o 00:04:15.183 CC lib/init/rpc.o 00:04:15.183 CC lib/virtio/virtio_vhost_user.o 00:04:15.442 CC lib/virtio/virtio_vfio_user.o 00:04:15.442 CC lib/accel/accel_rpc.o 00:04:15.442 CC lib/fsdev/fsdev_io.o 00:04:15.442 LIB libspdk_init.a 00:04:15.442 SO libspdk_init.so.6.0 00:04:15.442 CC lib/accel/accel_sw.o 00:04:15.442 CC lib/fsdev/fsdev_rpc.o 00:04:15.442 SYMLINK libspdk_init.so 00:04:15.442 CC lib/virtio/virtio_pci.o 00:04:15.701 LIB libspdk_nvme.a 00:04:15.701 LIB libspdk_fsdev.a 00:04:15.701 CC lib/event/app.o 00:04:15.701 CC lib/event/reactor.o 00:04:15.701 CC lib/event/log_rpc.o 00:04:15.701 CC lib/event/scheduler_static.o 00:04:15.701 CC lib/event/app_rpc.o 00:04:15.701 SO libspdk_fsdev.so.1.0 00:04:15.701 LIB libspdk_accel.a 00:04:15.701 LIB libspdk_virtio.a 00:04:15.960 SYMLINK libspdk_fsdev.so 00:04:15.960 SO libspdk_nvme.so.14.0 00:04:15.960 SO libspdk_accel.so.16.0 00:04:15.960 SO libspdk_virtio.so.7.0 00:04:15.960 SYMLINK libspdk_virtio.so 00:04:15.960 SYMLINK libspdk_accel.so 00:04:16.219 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:16.219 SYMLINK libspdk_nvme.so 00:04:16.219 LIB libspdk_event.a 00:04:16.219 CC lib/bdev/bdev.o 00:04:16.219 CC lib/bdev/bdev_rpc.o 00:04:16.219 CC lib/bdev/part.o 00:04:16.219 CC lib/bdev/bdev_zone.o 00:04:16.219 CC lib/bdev/scsi_nvme.o 00:04:16.219 SO libspdk_event.so.14.0 00:04:16.479 SYMLINK libspdk_event.so 00:04:16.740 LIB libspdk_fuse_dispatcher.a 00:04:16.740 SO libspdk_fuse_dispatcher.so.1.0 00:04:17.000 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:17.939 LIB libspdk_blob.a 00:04:17.939 SO libspdk_blob.so.11.0 00:04:17.939 SYMLINK libspdk_blob.so 00:04:18.508 CC lib/blobfs/tree.o 00:04:18.508 CC lib/blobfs/blobfs.o 00:04:18.508 CC lib/lvol/lvol.o 00:04:18.768 LIB libspdk_bdev.a 00:04:19.027 SO libspdk_bdev.so.16.0 00:04:19.027 SYMLINK libspdk_bdev.so 00:04:19.287 LIB libspdk_blobfs.a 00:04:19.287 CC lib/ftl/ftl_core.o 00:04:19.287 CC lib/ftl/ftl_init.o 00:04:19.287 CC lib/ftl/ftl_debug.o 00:04:19.287 CC lib/ftl/ftl_layout.o 00:04:19.287 CC lib/ublk/ublk.o 00:04:19.287 CC lib/nvmf/ctrlr.o 00:04:19.287 SO libspdk_blobfs.so.10.0 00:04:19.287 CC lib/scsi/dev.o 00:04:19.287 CC lib/nbd/nbd.o 00:04:19.287 SYMLINK libspdk_blobfs.so 00:04:19.287 CC lib/scsi/lun.o 00:04:19.287 LIB libspdk_lvol.a 00:04:19.287 SO libspdk_lvol.so.10.0 00:04:19.287 SYMLINK libspdk_lvol.so 00:04:19.287 CC lib/scsi/port.o 00:04:19.546 CC lib/nvmf/ctrlr_discovery.o 00:04:19.546 CC lib/nvmf/ctrlr_bdev.o 00:04:19.546 CC lib/nvmf/subsystem.o 00:04:19.546 CC lib/nvmf/nvmf.o 00:04:19.546 CC lib/nvmf/nvmf_rpc.o 00:04:19.546 CC lib/scsi/scsi.o 00:04:19.546 CC lib/ftl/ftl_io.o 00:04:19.546 CC lib/nbd/nbd_rpc.o 00:04:19.806 CC lib/scsi/scsi_bdev.o 00:04:19.806 LIB libspdk_nbd.a 00:04:19.806 CC lib/ublk/ublk_rpc.o 00:04:19.806 SO libspdk_nbd.so.7.0 00:04:19.806 CC lib/ftl/ftl_sb.o 00:04:19.806 SYMLINK libspdk_nbd.so 00:04:19.806 CC lib/nvmf/transport.o 00:04:19.806 CC lib/ftl/ftl_l2p.o 00:04:20.065 LIB libspdk_ublk.a 00:04:20.065 SO libspdk_ublk.so.3.0 00:04:20.065 CC lib/scsi/scsi_pr.o 00:04:20.065 SYMLINK libspdk_ublk.so 00:04:20.065 CC lib/scsi/scsi_rpc.o 00:04:20.065 CC lib/ftl/ftl_l2p_flat.o 00:04:20.065 CC lib/scsi/task.o 00:04:20.323 CC lib/ftl/ftl_nv_cache.o 00:04:20.323 CC lib/ftl/ftl_band.o 00:04:20.323 CC lib/ftl/ftl_band_ops.o 00:04:20.323 CC lib/nvmf/tcp.o 00:04:20.323 CC lib/nvmf/stubs.o 00:04:20.323 CC lib/nvmf/mdns_server.o 00:04:20.323 LIB libspdk_scsi.a 00:04:20.323 SO libspdk_scsi.so.9.0 00:04:20.582 
SYMLINK libspdk_scsi.so 00:04:20.582 CC lib/nvmf/rdma.o 00:04:20.582 CC lib/ftl/ftl_writer.o 00:04:20.582 CC lib/nvmf/auth.o 00:04:20.841 CC lib/ftl/ftl_rq.o 00:04:20.841 CC lib/ftl/ftl_reloc.o 00:04:20.841 CC lib/ftl/ftl_l2p_cache.o 00:04:20.841 CC lib/iscsi/conn.o 00:04:20.841 CC lib/vhost/vhost.o 00:04:21.100 CC lib/ftl/ftl_p2l.o 00:04:21.100 CC lib/ftl/ftl_p2l_log.o 00:04:21.100 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.358 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:21.359 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:21.359 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.359 CC lib/vhost/vhost_rpc.o 00:04:21.359 CC lib/vhost/vhost_scsi.o 00:04:21.359 CC lib/vhost/vhost_blk.o 00:04:21.359 CC lib/iscsi/init_grp.o 00:04:21.359 CC lib/iscsi/iscsi.o 00:04:21.618 CC lib/vhost/rte_vhost_user.o 00:04:21.618 CC lib/iscsi/param.o 00:04:21.618 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.618 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.877 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:21.877 CC lib/iscsi/portal_grp.o 00:04:21.877 CC lib/iscsi/tgt_node.o 00:04:21.877 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:21.877 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.136 CC lib/iscsi/iscsi_subsystem.o 00:04:22.136 CC lib/iscsi/iscsi_rpc.o 00:04:22.136 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.136 CC lib/iscsi/task.o 00:04:22.136 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:22.395 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:22.395 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:22.395 CC lib/ftl/utils/ftl_conf.o 00:04:22.395 CC lib/ftl/utils/ftl_md.o 00:04:22.395 CC lib/ftl/utils/ftl_mempool.o 00:04:22.395 LIB libspdk_vhost.a 00:04:22.395 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.395 CC lib/ftl/utils/ftl_property.o 00:04:22.395 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.395 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.395 SO libspdk_vhost.so.8.0 00:04:22.654 SYMLINK libspdk_vhost.so 00:04:22.654 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.654 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.654 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.654 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.654 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.654 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.654 LIB libspdk_nvmf.a 00:04:22.654 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.913 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.913 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.913 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:22.913 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:22.913 LIB libspdk_iscsi.a 00:04:22.913 CC lib/ftl/base/ftl_base_dev.o 00:04:22.913 SO libspdk_nvmf.so.19.0 00:04:22.913 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.913 CC lib/ftl/ftl_trace.o 00:04:22.913 SO libspdk_iscsi.so.8.0 00:04:23.172 SYMLINK libspdk_iscsi.so 00:04:23.172 SYMLINK libspdk_nvmf.so 00:04:23.172 LIB libspdk_ftl.a 00:04:23.431 SO libspdk_ftl.so.9.0 00:04:23.690 SYMLINK libspdk_ftl.so 00:04:23.950 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.210 CC module/keyring/linux/keyring.o 00:04:24.210 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.210 CC module/sock/posix/posix.o 00:04:24.210 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.210 CC module/accel/error/accel_error.o 00:04:24.210 CC module/blob/bdev/blob_bdev.o 00:04:24.210 CC module/keyring/file/keyring.o 00:04:24.210 CC module/fsdev/aio/fsdev_aio.o 00:04:24.210 CC module/scheduler/gscheduler/gscheduler.o 00:04:24.210 LIB libspdk_env_dpdk_rpc.a 00:04:24.210 SO libspdk_env_dpdk_rpc.so.6.0 00:04:24.210 SYMLINK libspdk_env_dpdk_rpc.so 00:04:24.210 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:24.210 CC module/keyring/linux/keyring_rpc.o 00:04:24.210 CC module/keyring/file/keyring_rpc.o 00:04:24.210 LIB libspdk_scheduler_dpdk_governor.a 00:04:24.210 LIB libspdk_scheduler_gscheduler.a 00:04:24.210 CC module/accel/error/accel_error_rpc.o 00:04:24.210 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:24.210 LIB libspdk_scheduler_dynamic.a 00:04:24.210 SO libspdk_scheduler_gscheduler.so.4.0 00:04:24.470 SO libspdk_scheduler_dynamic.so.4.0 
00:04:24.470 SYMLINK libspdk_scheduler_gscheduler.so 00:04:24.470 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:24.470 CC module/fsdev/aio/linux_aio_mgr.o 00:04:24.470 SYMLINK libspdk_scheduler_dynamic.so 00:04:24.470 LIB libspdk_keyring_linux.a 00:04:24.470 LIB libspdk_keyring_file.a 00:04:24.470 LIB libspdk_blob_bdev.a 00:04:24.470 SO libspdk_keyring_linux.so.1.0 00:04:24.470 LIB libspdk_accel_error.a 00:04:24.470 SO libspdk_keyring_file.so.2.0 00:04:24.470 SO libspdk_blob_bdev.so.11.0 00:04:24.470 SO libspdk_accel_error.so.2.0 00:04:24.470 SYMLINK libspdk_keyring_linux.so 00:04:24.470 SYMLINK libspdk_blob_bdev.so 00:04:24.470 SYMLINK libspdk_keyring_file.so 00:04:24.470 SYMLINK libspdk_accel_error.so 00:04:24.470 CC module/accel/ioat/accel_ioat.o 00:04:24.470 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.470 CC module/accel/iaa/accel_iaa.o 00:04:24.470 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.470 CC module/accel/dsa/accel_dsa.o 00:04:24.470 CC module/accel/dsa/accel_dsa_rpc.o 00:04:24.729 LIB libspdk_accel_ioat.a 00:04:24.729 LIB libspdk_accel_iaa.a 00:04:24.729 CC module/bdev/delay/vbdev_delay.o 00:04:24.729 CC module/blobfs/bdev/blobfs_bdev.o 00:04:24.729 SO libspdk_accel_ioat.so.6.0 00:04:24.729 SO libspdk_accel_iaa.so.3.0 00:04:24.729 LIB libspdk_fsdev_aio.a 00:04:24.729 SYMLINK libspdk_accel_ioat.so 00:04:24.729 CC module/bdev/error/vbdev_error.o 00:04:24.729 LIB libspdk_accel_dsa.a 00:04:24.729 CC module/bdev/lvol/vbdev_lvol.o 00:04:24.729 CC module/bdev/gpt/gpt.o 00:04:24.729 SYMLINK libspdk_accel_iaa.so 00:04:24.729 CC module/bdev/error/vbdev_error_rpc.o 00:04:24.729 SO libspdk_fsdev_aio.so.1.0 00:04:24.988 SO libspdk_accel_dsa.so.5.0 00:04:24.988 LIB libspdk_sock_posix.a 00:04:24.988 SYMLINK libspdk_accel_dsa.so 00:04:24.988 SYMLINK libspdk_fsdev_aio.so 00:04:24.988 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:24.988 SO libspdk_sock_posix.so.6.0 00:04:24.988 CC module/bdev/malloc/bdev_malloc.o 00:04:24.988 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:04:24.988 CC module/bdev/gpt/vbdev_gpt.o 00:04:24.988 SYMLINK libspdk_sock_posix.so 00:04:24.988 CC module/bdev/nvme/bdev_nvme.o 00:04:24.988 CC module/bdev/null/bdev_null.o 00:04:24.988 LIB libspdk_bdev_error.a 00:04:24.988 LIB libspdk_blobfs_bdev.a 00:04:24.988 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.248 SO libspdk_bdev_error.so.6.0 00:04:25.248 SO libspdk_blobfs_bdev.so.6.0 00:04:25.248 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.248 SYMLINK libspdk_blobfs_bdev.so 00:04:25.248 SYMLINK libspdk_bdev_error.so 00:04:25.248 CC module/bdev/null/bdev_null_rpc.o 00:04:25.248 LIB libspdk_bdev_delay.a 00:04:25.248 LIB libspdk_bdev_gpt.a 00:04:25.248 CC module/bdev/raid/bdev_raid.o 00:04:25.248 CC module/bdev/split/vbdev_split.o 00:04:25.248 SO libspdk_bdev_delay.so.6.0 00:04:25.248 SO libspdk_bdev_gpt.so.6.0 00:04:25.248 CC module/bdev/raid/bdev_raid_rpc.o 00:04:25.248 LIB libspdk_bdev_null.a 00:04:25.248 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.248 SYMLINK libspdk_bdev_delay.so 00:04:25.248 CC module/bdev/raid/bdev_raid_sb.o 00:04:25.248 LIB libspdk_bdev_malloc.a 00:04:25.248 SYMLINK libspdk_bdev_gpt.so 00:04:25.248 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.507 SO libspdk_bdev_null.so.6.0 00:04:25.507 SO libspdk_bdev_malloc.so.6.0 00:04:25.507 SYMLINK libspdk_bdev_null.so 00:04:25.507 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:25.507 SYMLINK libspdk_bdev_malloc.so 00:04:25.507 CC module/bdev/nvme/nvme_rpc.o 00:04:25.507 CC module/bdev/split/vbdev_split_rpc.o 00:04:25.507 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.507 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:25.507 LIB libspdk_bdev_passthru.a 00:04:25.507 CC module/bdev/raid/raid0.o 00:04:25.507 SO libspdk_bdev_passthru.so.6.0 00:04:25.766 LIB libspdk_bdev_split.a 00:04:25.766 SYMLINK libspdk_bdev_passthru.so 00:04:25.766 CC module/bdev/nvme/vbdev_opal.o 00:04:25.766 SO libspdk_bdev_split.so.6.0 00:04:25.766 LIB libspdk_bdev_lvol.a 
00:04:25.766 SO libspdk_bdev_lvol.so.6.0 00:04:25.766 SYMLINK libspdk_bdev_split.so 00:04:25.766 SYMLINK libspdk_bdev_lvol.so 00:04:25.766 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.026 CC module/bdev/aio/bdev_aio.o 00:04:26.026 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.026 CC module/bdev/ftl/bdev_ftl.o 00:04:26.026 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:26.026 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:26.026 CC module/bdev/iscsi/bdev_iscsi.o 00:04:26.026 CC module/bdev/raid/raid1.o 00:04:26.026 LIB libspdk_bdev_zone_block.a 00:04:26.026 SO libspdk_bdev_zone_block.so.6.0 00:04:26.026 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:26.026 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:26.026 SYMLINK libspdk_bdev_zone_block.so 00:04:26.026 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:26.285 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:26.285 CC module/bdev/aio/bdev_aio_rpc.o 00:04:26.285 CC module/bdev/raid/concat.o 00:04:26.285 LIB libspdk_bdev_ftl.a 00:04:26.285 CC module/bdev/raid/raid5f.o 00:04:26.285 SO libspdk_bdev_ftl.so.6.0 00:04:26.285 LIB libspdk_bdev_iscsi.a 00:04:26.285 SYMLINK libspdk_bdev_ftl.so 00:04:26.285 SO libspdk_bdev_iscsi.so.6.0 00:04:26.285 LIB libspdk_bdev_aio.a 00:04:26.285 SO libspdk_bdev_aio.so.6.0 00:04:26.545 SYMLINK libspdk_bdev_iscsi.so 00:04:26.545 SYMLINK libspdk_bdev_aio.so 00:04:26.545 LIB libspdk_bdev_virtio.a 00:04:26.545 SO libspdk_bdev_virtio.so.6.0 00:04:26.545 SYMLINK libspdk_bdev_virtio.so 00:04:26.805 LIB libspdk_bdev_raid.a 00:04:26.805 SO libspdk_bdev_raid.so.6.0 00:04:27.065 SYMLINK libspdk_bdev_raid.so 00:04:27.325 LIB libspdk_bdev_nvme.a 00:04:27.325 SO libspdk_bdev_nvme.so.7.0 00:04:27.585 SYMLINK libspdk_bdev_nvme.so 00:04:28.154 CC module/event/subsystems/sock/sock.o 00:04:28.154 CC module/event/subsystems/iobuf/iobuf.o 00:04:28.154 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:28.154 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:28.154 CC module/event/subsystems/fsdev/fsdev.o 
00:04:28.154 CC module/event/subsystems/vmd/vmd.o 00:04:28.154 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:28.154 CC module/event/subsystems/keyring/keyring.o 00:04:28.154 CC module/event/subsystems/scheduler/scheduler.o 00:04:28.154 LIB libspdk_event_fsdev.a 00:04:28.154 LIB libspdk_event_vhost_blk.a 00:04:28.154 LIB libspdk_event_keyring.a 00:04:28.154 LIB libspdk_event_vmd.a 00:04:28.154 LIB libspdk_event_iobuf.a 00:04:28.154 LIB libspdk_event_sock.a 00:04:28.154 LIB libspdk_event_scheduler.a 00:04:28.154 SO libspdk_event_fsdev.so.1.0 00:04:28.154 SO libspdk_event_keyring.so.1.0 00:04:28.154 SO libspdk_event_vhost_blk.so.3.0 00:04:28.154 SO libspdk_event_vmd.so.6.0 00:04:28.154 SO libspdk_event_iobuf.so.3.0 00:04:28.154 SO libspdk_event_scheduler.so.4.0 00:04:28.154 SO libspdk_event_sock.so.5.0 00:04:28.154 SYMLINK libspdk_event_fsdev.so 00:04:28.154 SYMLINK libspdk_event_keyring.so 00:04:28.154 SYMLINK libspdk_event_vhost_blk.so 00:04:28.154 SYMLINK libspdk_event_sock.so 00:04:28.154 SYMLINK libspdk_event_scheduler.so 00:04:28.154 SYMLINK libspdk_event_vmd.so 00:04:28.154 SYMLINK libspdk_event_iobuf.so 00:04:28.724 CC module/event/subsystems/accel/accel.o 00:04:28.724 LIB libspdk_event_accel.a 00:04:28.985 SO libspdk_event_accel.so.6.0 00:04:28.985 SYMLINK libspdk_event_accel.so 00:04:29.245 CC module/event/subsystems/bdev/bdev.o 00:04:29.506 LIB libspdk_event_bdev.a 00:04:29.506 SO libspdk_event_bdev.so.6.0 00:04:29.506 SYMLINK libspdk_event_bdev.so 00:04:30.077 CC module/event/subsystems/scsi/scsi.o 00:04:30.077 CC module/event/subsystems/nbd/nbd.o 00:04:30.077 CC module/event/subsystems/ublk/ublk.o 00:04:30.077 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.077 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.077 LIB libspdk_event_ublk.a 00:04:30.077 LIB libspdk_event_scsi.a 00:04:30.077 LIB libspdk_event_nbd.a 00:04:30.077 SO libspdk_event_ublk.so.3.0 00:04:30.077 SO libspdk_event_scsi.so.6.0 00:04:30.077 SO libspdk_event_nbd.so.6.0 00:04:30.077 
SYMLINK libspdk_event_ublk.so 00:04:30.077 SYMLINK libspdk_event_scsi.so 00:04:30.337 LIB libspdk_event_nvmf.a 00:04:30.337 SYMLINK libspdk_event_nbd.so 00:04:30.337 SO libspdk_event_nvmf.so.6.0 00:04:30.337 SYMLINK libspdk_event_nvmf.so 00:04:30.605 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:30.605 CC module/event/subsystems/iscsi/iscsi.o 00:04:30.605 LIB libspdk_event_vhost_scsi.a 00:04:30.605 LIB libspdk_event_iscsi.a 00:04:30.605 SO libspdk_event_vhost_scsi.so.3.0 00:04:30.605 SO libspdk_event_iscsi.so.6.0 00:04:30.892 SYMLINK libspdk_event_vhost_scsi.so 00:04:30.892 SYMLINK libspdk_event_iscsi.so 00:04:30.892 SO libspdk.so.6.0 00:04:30.892 SYMLINK libspdk.so 00:04:31.477 CXX app/trace/trace.o 00:04:31.477 CC app/trace_record/trace_record.o 00:04:31.477 TEST_HEADER include/spdk/accel.h 00:04:31.477 TEST_HEADER include/spdk/accel_module.h 00:04:31.477 TEST_HEADER include/spdk/assert.h 00:04:31.477 TEST_HEADER include/spdk/barrier.h 00:04:31.477 TEST_HEADER include/spdk/base64.h 00:04:31.477 TEST_HEADER include/spdk/bdev.h 00:04:31.477 TEST_HEADER include/spdk/bdev_module.h 00:04:31.477 TEST_HEADER include/spdk/bdev_zone.h 00:04:31.477 TEST_HEADER include/spdk/bit_array.h 00:04:31.477 TEST_HEADER include/spdk/bit_pool.h 00:04:31.477 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.477 TEST_HEADER include/spdk/blob_bdev.h 00:04:31.477 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:31.477 TEST_HEADER include/spdk/blobfs.h 00:04:31.477 TEST_HEADER include/spdk/blob.h 00:04:31.477 TEST_HEADER include/spdk/conf.h 00:04:31.477 TEST_HEADER include/spdk/config.h 00:04:31.477 TEST_HEADER include/spdk/cpuset.h 00:04:31.477 TEST_HEADER include/spdk/crc16.h 00:04:31.477 TEST_HEADER include/spdk/crc32.h 00:04:31.477 TEST_HEADER include/spdk/crc64.h 00:04:31.477 TEST_HEADER include/spdk/dif.h 00:04:31.477 TEST_HEADER include/spdk/dma.h 00:04:31.477 TEST_HEADER include/spdk/endian.h 00:04:31.477 TEST_HEADER include/spdk/env_dpdk.h 00:04:31.477 TEST_HEADER 
include/spdk/env.h 00:04:31.477 TEST_HEADER include/spdk/event.h 00:04:31.477 TEST_HEADER include/spdk/fd_group.h 00:04:31.477 TEST_HEADER include/spdk/fd.h 00:04:31.477 TEST_HEADER include/spdk/file.h 00:04:31.477 TEST_HEADER include/spdk/fsdev.h 00:04:31.477 TEST_HEADER include/spdk/fsdev_module.h 00:04:31.477 CC examples/ioat/perf/perf.o 00:04:31.477 TEST_HEADER include/spdk/ftl.h 00:04:31.477 CC test/thread/poller_perf/poller_perf.o 00:04:31.477 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:31.477 TEST_HEADER include/spdk/gpt_spec.h 00:04:31.477 CC examples/util/zipf/zipf.o 00:04:31.477 TEST_HEADER include/spdk/hexlify.h 00:04:31.477 TEST_HEADER include/spdk/histogram_data.h 00:04:31.477 TEST_HEADER include/spdk/idxd.h 00:04:31.477 TEST_HEADER include/spdk/idxd_spec.h 00:04:31.477 TEST_HEADER include/spdk/init.h 00:04:31.477 TEST_HEADER include/spdk/ioat.h 00:04:31.477 TEST_HEADER include/spdk/ioat_spec.h 00:04:31.477 TEST_HEADER include/spdk/iscsi_spec.h 00:04:31.477 TEST_HEADER include/spdk/json.h 00:04:31.477 TEST_HEADER include/spdk/jsonrpc.h 00:04:31.477 TEST_HEADER include/spdk/keyring.h 00:04:31.477 TEST_HEADER include/spdk/keyring_module.h 00:04:31.477 TEST_HEADER include/spdk/likely.h 00:04:31.477 TEST_HEADER include/spdk/log.h 00:04:31.477 TEST_HEADER include/spdk/lvol.h 00:04:31.477 CC test/app/bdev_svc/bdev_svc.o 00:04:31.477 TEST_HEADER include/spdk/md5.h 00:04:31.477 TEST_HEADER include/spdk/memory.h 00:04:31.477 TEST_HEADER include/spdk/mmio.h 00:04:31.477 TEST_HEADER include/spdk/nbd.h 00:04:31.477 TEST_HEADER include/spdk/net.h 00:04:31.477 TEST_HEADER include/spdk/notify.h 00:04:31.477 TEST_HEADER include/spdk/nvme.h 00:04:31.477 TEST_HEADER include/spdk/nvme_intel.h 00:04:31.477 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:31.477 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:31.477 TEST_HEADER include/spdk/nvme_spec.h 00:04:31.477 CC test/dma/test_dma/test_dma.o 00:04:31.477 TEST_HEADER include/spdk/nvme_zns.h 00:04:31.477 TEST_HEADER 
include/spdk/nvmf_cmd.h 00:04:31.477 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:31.477 TEST_HEADER include/spdk/nvmf.h 00:04:31.477 TEST_HEADER include/spdk/nvmf_spec.h 00:04:31.477 TEST_HEADER include/spdk/nvmf_transport.h 00:04:31.477 TEST_HEADER include/spdk/opal.h 00:04:31.477 TEST_HEADER include/spdk/opal_spec.h 00:04:31.477 TEST_HEADER include/spdk/pci_ids.h 00:04:31.477 TEST_HEADER include/spdk/pipe.h 00:04:31.477 CC test/env/mem_callbacks/mem_callbacks.o 00:04:31.477 TEST_HEADER include/spdk/queue.h 00:04:31.477 TEST_HEADER include/spdk/reduce.h 00:04:31.477 TEST_HEADER include/spdk/rpc.h 00:04:31.477 TEST_HEADER include/spdk/scheduler.h 00:04:31.477 TEST_HEADER include/spdk/scsi.h 00:04:31.477 TEST_HEADER include/spdk/scsi_spec.h 00:04:31.477 TEST_HEADER include/spdk/sock.h 00:04:31.477 TEST_HEADER include/spdk/stdinc.h 00:04:31.477 TEST_HEADER include/spdk/string.h 00:04:31.477 TEST_HEADER include/spdk/thread.h 00:04:31.477 TEST_HEADER include/spdk/trace.h 00:04:31.477 TEST_HEADER include/spdk/trace_parser.h 00:04:31.477 TEST_HEADER include/spdk/tree.h 00:04:31.477 TEST_HEADER include/spdk/ublk.h 00:04:31.477 TEST_HEADER include/spdk/util.h 00:04:31.478 TEST_HEADER include/spdk/uuid.h 00:04:31.478 TEST_HEADER include/spdk/version.h 00:04:31.478 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:31.478 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:31.478 TEST_HEADER include/spdk/vhost.h 00:04:31.478 TEST_HEADER include/spdk/vmd.h 00:04:31.478 TEST_HEADER include/spdk/xor.h 00:04:31.478 TEST_HEADER include/spdk/zipf.h 00:04:31.478 CXX test/cpp_headers/accel.o 00:04:31.478 LINK interrupt_tgt 00:04:31.478 LINK poller_perf 00:04:31.478 LINK zipf 00:04:31.478 LINK spdk_trace_record 00:04:31.478 LINK ioat_perf 00:04:31.478 LINK bdev_svc 00:04:31.736 CXX test/cpp_headers/accel_module.o 00:04:31.736 LINK spdk_trace 00:04:31.736 CC test/env/vtophys/vtophys.o 00:04:31.736 CC examples/ioat/verify/verify.o 00:04:31.736 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:31.736 CC test/app/histogram_perf/histogram_perf.o 00:04:31.736 CXX test/cpp_headers/assert.o 00:04:31.736 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:31.736 LINK vtophys 00:04:31.996 LINK test_dma 00:04:31.996 LINK histogram_perf 00:04:31.996 LINK env_dpdk_post_init 00:04:31.996 CC examples/thread/thread/thread_ex.o 00:04:31.996 LINK mem_callbacks 00:04:31.996 CC app/nvmf_tgt/nvmf_main.o 00:04:31.996 CXX test/cpp_headers/barrier.o 00:04:31.996 LINK verify 00:04:31.996 CXX test/cpp_headers/base64.o 00:04:31.996 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:31.996 LINK nvmf_tgt 00:04:31.996 CC test/app/jsoncat/jsoncat.o 00:04:31.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:32.256 CC test/env/memory/memory_ut.o 00:04:32.256 CC test/app/stub/stub.o 00:04:32.256 LINK thread 00:04:32.256 CXX test/cpp_headers/bdev.o 00:04:32.256 CC examples/sock/hello_world/hello_sock.o 00:04:32.256 LINK nvme_fuzz 00:04:32.256 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:32.256 LINK jsoncat 00:04:32.256 LINK stub 00:04:32.256 CC test/env/pci/pci_ut.o 00:04:32.514 CXX test/cpp_headers/bdev_module.o 00:04:32.514 CC app/iscsi_tgt/iscsi_tgt.o 00:04:32.514 LINK hello_sock 00:04:32.514 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.514 CC examples/idxd/perf/perf.o 00:04:32.514 CC examples/vmd/led/led.o 00:04:32.514 CXX test/cpp_headers/bdev_zone.o 00:04:32.514 LINK iscsi_tgt 00:04:32.773 LINK vhost_fuzz 00:04:32.773 LINK lsvmd 00:04:32.773 LINK led 00:04:32.773 CXX test/cpp_headers/bit_array.o 00:04:32.773 LINK pci_ut 00:04:32.773 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:32.773 LINK idxd_perf 00:04:32.773 CC test/rpc_client/rpc_client_test.o 00:04:32.773 CC app/spdk_lspci/spdk_lspci.o 00:04:33.032 CC app/spdk_nvme_perf/perf.o 00:04:33.032 CXX test/cpp_headers/bit_pool.o 00:04:33.032 CXX test/cpp_headers/blob_bdev.o 00:04:33.032 CC app/spdk_tgt/spdk_tgt.o 00:04:33.032 LINK spdk_lspci 00:04:33.032 CXX 
test/cpp_headers/blobfs_bdev.o 00:04:33.032 LINK hello_fsdev 00:04:33.032 LINK rpc_client_test 00:04:33.032 CXX test/cpp_headers/blobfs.o 00:04:33.032 CC app/spdk_nvme_identify/identify.o 00:04:33.032 CXX test/cpp_headers/blob.o 00:04:33.032 LINK memory_ut 00:04:33.292 LINK spdk_tgt 00:04:33.292 CXX test/cpp_headers/conf.o 00:04:33.292 CXX test/cpp_headers/config.o 00:04:33.292 CXX test/cpp_headers/cpuset.o 00:04:33.292 CXX test/cpp_headers/crc16.o 00:04:33.292 CXX test/cpp_headers/crc32.o 00:04:33.292 CXX test/cpp_headers/crc64.o 00:04:33.292 CC app/spdk_nvme_discover/discovery_aer.o 00:04:33.292 CXX test/cpp_headers/dif.o 00:04:33.549 CC examples/accel/perf/accel_perf.o 00:04:33.549 CXX test/cpp_headers/dma.o 00:04:33.549 LINK spdk_nvme_discover 00:04:33.549 CC test/accel/dif/dif.o 00:04:33.549 CC test/blobfs/mkfs/mkfs.o 00:04:33.549 CC test/event/event_perf/event_perf.o 00:04:33.549 CXX test/cpp_headers/endian.o 00:04:33.808 CC test/lvol/esnap/esnap.o 00:04:33.808 LINK spdk_nvme_perf 00:04:33.808 LINK event_perf 00:04:33.808 CXX test/cpp_headers/env_dpdk.o 00:04:33.808 LINK mkfs 00:04:33.808 LINK iscsi_fuzz 00:04:33.808 CC test/nvme/aer/aer.o 00:04:33.808 CXX test/cpp_headers/env.o 00:04:34.067 LINK accel_perf 00:04:34.067 CC test/event/reactor/reactor.o 00:04:34.067 CC app/spdk_top/spdk_top.o 00:04:34.067 LINK spdk_nvme_identify 00:04:34.067 CXX test/cpp_headers/event.o 00:04:34.067 CXX test/cpp_headers/fd_group.o 00:04:34.067 LINK reactor 00:04:34.067 CC test/nvme/reset/reset.o 00:04:34.067 LINK aer 00:04:34.327 CXX test/cpp_headers/fd.o 00:04:34.327 LINK dif 00:04:34.327 CC examples/blob/hello_world/hello_blob.o 00:04:34.327 CC test/event/reactor_perf/reactor_perf.o 00:04:34.327 CC app/vhost/vhost.o 00:04:34.327 CC app/spdk_dd/spdk_dd.o 00:04:34.327 CXX test/cpp_headers/file.o 00:04:34.327 LINK reset 00:04:34.327 CC test/event/app_repeat/app_repeat.o 00:04:34.587 LINK reactor_perf 00:04:34.587 LINK hello_blob 00:04:34.587 CXX test/cpp_headers/fsdev.o 
00:04:34.587 LINK vhost 00:04:34.587 CC test/event/scheduler/scheduler.o 00:04:34.587 LINK app_repeat 00:04:34.587 CC test/nvme/sgl/sgl.o 00:04:34.587 CXX test/cpp_headers/fsdev_module.o 00:04:34.587 CC test/nvme/e2edp/nvme_dp.o 00:04:34.587 CXX test/cpp_headers/ftl.o 00:04:34.587 CXX test/cpp_headers/fuse_dispatcher.o 00:04:34.587 LINK spdk_dd 00:04:34.847 LINK scheduler 00:04:34.847 CC examples/blob/cli/blobcli.o 00:04:34.847 CC test/nvme/overhead/overhead.o 00:04:34.847 CXX test/cpp_headers/gpt_spec.o 00:04:34.847 CXX test/cpp_headers/hexlify.o 00:04:34.847 LINK sgl 00:04:34.847 CC test/nvme/err_injection/err_injection.o 00:04:34.847 LINK nvme_dp 00:04:34.847 LINK spdk_top 00:04:35.107 CXX test/cpp_headers/histogram_data.o 00:04:35.107 CC test/nvme/startup/startup.o 00:04:35.107 CXX test/cpp_headers/idxd.o 00:04:35.107 LINK err_injection 00:04:35.107 CC test/nvme/reserve/reserve.o 00:04:35.107 CXX test/cpp_headers/idxd_spec.o 00:04:35.107 LINK overhead 00:04:35.107 LINK startup 00:04:35.107 CC test/nvme/simple_copy/simple_copy.o 00:04:35.366 CXX test/cpp_headers/init.o 00:04:35.366 CC app/fio/nvme/fio_plugin.o 00:04:35.366 CC test/nvme/connect_stress/connect_stress.o 00:04:35.366 CXX test/cpp_headers/ioat.o 00:04:35.366 LINK reserve 00:04:35.366 LINK blobcli 00:04:35.366 CXX test/cpp_headers/ioat_spec.o 00:04:35.366 CXX test/cpp_headers/iscsi_spec.o 00:04:35.366 CC test/bdev/bdevio/bdevio.o 00:04:35.366 LINK connect_stress 00:04:35.366 LINK simple_copy 00:04:35.625 CC examples/nvme/hello_world/hello_world.o 00:04:35.625 CC test/nvme/boot_partition/boot_partition.o 00:04:35.625 CC examples/nvme/reconnect/reconnect.o 00:04:35.625 CXX test/cpp_headers/json.o 00:04:35.625 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:35.625 CC examples/nvme/arbitration/arbitration.o 00:04:35.625 CXX test/cpp_headers/jsonrpc.o 00:04:35.625 LINK boot_partition 00:04:35.625 CC examples/nvme/hotplug/hotplug.o 00:04:35.625 LINK hello_world 00:04:35.884 LINK bdevio 00:04:35.884 LINK 
spdk_nvme 00:04:35.884 CXX test/cpp_headers/keyring.o 00:04:35.884 LINK reconnect 00:04:35.884 LINK hotplug 00:04:35.884 CC test/nvme/compliance/nvme_compliance.o 00:04:35.884 CXX test/cpp_headers/keyring_module.o 00:04:35.884 CXX test/cpp_headers/likely.o 00:04:35.884 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.884 LINK arbitration 00:04:35.884 CC app/fio/bdev/fio_plugin.o 00:04:36.144 LINK nvme_manage 00:04:36.144 CXX test/cpp_headers/log.o 00:04:36.144 CXX test/cpp_headers/lvol.o 00:04:36.144 CXX test/cpp_headers/md5.o 00:04:36.144 LINK cmb_copy 00:04:36.144 CC examples/nvme/abort/abort.o 00:04:36.144 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:36.144 LINK nvme_compliance 00:04:36.402 CC test/nvme/fused_ordering/fused_ordering.o 00:04:36.402 CXX test/cpp_headers/memory.o 00:04:36.402 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:36.402 CC test/nvme/fdp/fdp.o 00:04:36.402 CC test/nvme/cuse/cuse.o 00:04:36.402 LINK pmr_persistence 00:04:36.402 CXX test/cpp_headers/mmio.o 00:04:36.402 LINK doorbell_aers 00:04:36.402 LINK spdk_bdev 00:04:36.402 LINK fused_ordering 00:04:36.402 LINK abort 00:04:36.661 CXX test/cpp_headers/nbd.o 00:04:36.661 CC examples/bdev/hello_world/hello_bdev.o 00:04:36.661 CXX test/cpp_headers/net.o 00:04:36.661 CXX test/cpp_headers/notify.o 00:04:36.661 CXX test/cpp_headers/nvme.o 00:04:36.661 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.661 LINK fdp 00:04:36.661 CXX test/cpp_headers/nvme_intel.o 00:04:36.661 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.661 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.661 CXX test/cpp_headers/nvme_spec.o 00:04:36.661 CXX test/cpp_headers/nvme_zns.o 00:04:36.661 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.661 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.921 LINK hello_bdev 00:04:36.921 CXX test/cpp_headers/nvmf.o 00:04:36.921 CXX test/cpp_headers/nvmf_spec.o 00:04:36.921 CXX test/cpp_headers/nvmf_transport.o 00:04:36.921 CXX test/cpp_headers/opal.o 00:04:36.921 CXX test/cpp_headers/opal_spec.o 
00:04:36.921 CXX test/cpp_headers/pci_ids.o 00:04:36.921 CXX test/cpp_headers/pipe.o 00:04:36.921 CXX test/cpp_headers/queue.o 00:04:36.921 CXX test/cpp_headers/reduce.o 00:04:36.921 CXX test/cpp_headers/rpc.o 00:04:37.180 CXX test/cpp_headers/scheduler.o 00:04:37.180 CXX test/cpp_headers/scsi.o 00:04:37.180 CXX test/cpp_headers/scsi_spec.o 00:04:37.180 CXX test/cpp_headers/sock.o 00:04:37.180 CXX test/cpp_headers/stdinc.o 00:04:37.180 CXX test/cpp_headers/string.o 00:04:37.180 CXX test/cpp_headers/thread.o 00:04:37.180 CXX test/cpp_headers/trace.o 00:04:37.180 CXX test/cpp_headers/trace_parser.o 00:04:37.180 CXX test/cpp_headers/tree.o 00:04:37.180 CXX test/cpp_headers/ublk.o 00:04:37.180 CXX test/cpp_headers/util.o 00:04:37.180 CXX test/cpp_headers/uuid.o 00:04:37.439 CXX test/cpp_headers/version.o 00:04:37.439 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.439 CXX test/cpp_headers/vfio_user_spec.o 00:04:37.439 CXX test/cpp_headers/vhost.o 00:04:37.439 CXX test/cpp_headers/vmd.o 00:04:37.439 LINK bdevperf 00:04:37.439 CXX test/cpp_headers/xor.o 00:04:37.439 CXX test/cpp_headers/zipf.o 00:04:37.439 LINK cuse 00:04:38.009 CC examples/nvmf/nvmf/nvmf.o 00:04:38.269 LINK nvmf 00:04:39.206 LINK esnap 00:04:39.776 00:04:39.776 real 1m15.122s 00:04:39.776 user 5m36.048s 00:04:39.776 sys 1m10.890s 00:04:39.776 16:17:31 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:39.776 16:17:31 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.776 ************************************ 00:04:39.776 END TEST make 00:04:39.776 ************************************ 00:04:39.776 16:17:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.776 16:17:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.776 16:17:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.776 16:17:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.776 16:17:31 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.776 16:17:31 -- pm/common@44 -- $ pid=6197 00:04:39.776 16:17:31 -- pm/common@50 -- $ kill -TERM 6197 00:04:39.776 16:17:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.776 16:17:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.776 16:17:31 -- pm/common@44 -- $ pid=6199 00:04:39.776 16:17:31 -- pm/common@50 -- $ kill -TERM 6199 00:04:39.777 16:17:31 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.777 16:17:31 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.777 16:17:31 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.777 16:17:31 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.777 16:17:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.777 16:17:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.777 16:17:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.777 16:17:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.777 16:17:31 -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.777 16:17:31 -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.777 16:17:31 -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.777 16:17:31 -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.777 16:17:31 -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.777 16:17:31 -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.777 16:17:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.777 16:17:31 -- scripts/common.sh@344 -- # case "$op" in 00:04:39.777 16:17:31 -- scripts/common.sh@345 -- # : 1 00:04:39.777 16:17:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.777 16:17:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.777 16:17:31 -- scripts/common.sh@365 -- # decimal 1 00:04:39.777 16:17:31 -- scripts/common.sh@353 -- # local d=1 00:04:39.777 16:17:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.777 16:17:31 -- scripts/common.sh@355 -- # echo 1 00:04:39.777 16:17:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.777 16:17:31 -- scripts/common.sh@366 -- # decimal 2 00:04:39.777 16:17:31 -- scripts/common.sh@353 -- # local d=2 00:04:39.777 16:17:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.777 16:17:31 -- scripts/common.sh@355 -- # echo 2 00:04:39.777 16:17:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.777 16:17:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.777 16:17:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.777 16:17:31 -- scripts/common.sh@368 -- # return 0 00:04:39.777 16:17:31 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.777 16:17:31 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.777 --rc genhtml_branch_coverage=1 00:04:39.777 --rc genhtml_function_coverage=1 00:04:39.777 --rc genhtml_legend=1 00:04:39.777 --rc geninfo_all_blocks=1 00:04:39.777 --rc geninfo_unexecuted_blocks=1 00:04:39.777 00:04:39.777 ' 00:04:39.777 16:17:31 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.777 --rc genhtml_branch_coverage=1 00:04:39.777 --rc genhtml_function_coverage=1 00:04:39.777 --rc genhtml_legend=1 00:04:39.777 --rc geninfo_all_blocks=1 00:04:39.777 --rc geninfo_unexecuted_blocks=1 00:04:39.777 00:04:39.777 ' 00:04:39.777 16:17:31 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.777 --rc genhtml_branch_coverage=1 00:04:39.777 --rc 
genhtml_function_coverage=1 00:04:39.777 --rc genhtml_legend=1 00:04:39.777 --rc geninfo_all_blocks=1 00:04:39.777 --rc geninfo_unexecuted_blocks=1 00:04:39.777 00:04:39.777 ' 00:04:39.777 16:17:31 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.777 --rc genhtml_branch_coverage=1 00:04:39.777 --rc genhtml_function_coverage=1 00:04:39.777 --rc genhtml_legend=1 00:04:39.777 --rc geninfo_all_blocks=1 00:04:39.777 --rc geninfo_unexecuted_blocks=1 00:04:39.777 00:04:39.777 ' 00:04:39.777 16:17:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.777 16:17:31 -- nvmf/common.sh@7 -- # uname -s 00:04:39.777 16:17:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.777 16:17:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.777 16:17:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.777 16:17:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.777 16:17:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.777 16:17:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.777 16:17:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.777 16:17:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.777 16:17:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.038 16:17:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.038 16:17:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58a8556f-8060-4dbf-98a4-c4a47e6467c0 00:04:40.038 16:17:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=58a8556f-8060-4dbf-98a4-c4a47e6467c0 00:04:40.038 16:17:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.038 16:17:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.038 16:17:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.038 16:17:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:40.038 16:17:31 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.038 16:17:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.038 16:17:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.038 16:17:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.038 16:17:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.038 16:17:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.038 16:17:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.038 16:17:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.038 16:17:31 -- paths/export.sh@5 -- # export PATH 00:04:40.038 16:17:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.038 16:17:31 -- nvmf/common.sh@51 -- # : 0 00:04:40.038 16:17:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.038 16:17:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.038 16:17:31 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:40.038 16:17:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.038 16:17:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.038 16:17:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.038 16:17:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.038 16:17:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.038 16:17:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.038 16:17:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:40.038 16:17:31 -- spdk/autotest.sh@32 -- # uname -s 00:04:40.038 16:17:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:40.038 16:17:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:40.038 16:17:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.038 16:17:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:40.038 16:17:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.038 16:17:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:40.038 16:17:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:40.038 16:17:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:40.038 16:17:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:40.038 16:17:31 -- spdk/autotest.sh@48 -- # udevadm_pid=66785 00:04:40.038 16:17:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:40.038 16:17:31 -- pm/common@17 -- # local monitor 00:04:40.038 16:17:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.038 16:17:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.038 16:17:31 -- pm/common@25 -- # sleep 1 00:04:40.038 16:17:31 -- pm/common@21 -- # date +%s 00:04:40.038 16:17:31 -- 
pm/common@21 -- # date +%s 00:04:40.038 16:17:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732810651 00:04:40.038 16:17:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732810651 00:04:40.038 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732810651_collect-vmstat.pm.log 00:04:40.038 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732810651_collect-cpu-load.pm.log 00:04:40.979 16:17:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.979 16:17:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.979 16:17:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.979 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:04:40.979 16:17:32 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.979 16:17:32 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:40.979 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:04:40.979 16:17:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:40.979 16:17:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:40.979 16:17:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:40.979 16:17:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:40.979 16:17:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:40.979 16:17:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.979 16:17:32 -- common/autotest_common.sh@1455 -- # uname 00:04:40.979 16:17:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:40.979 16:17:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.979 16:17:32 -- common/autotest_common.sh@1475 -- 
# uname 00:04:40.979 16:17:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:40.979 16:17:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:40.979 16:17:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:41.240 lcov: LCOV version 1.15 00:04:41.240 16:17:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:56.129 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:56.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.056 16:18:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:11.056 16:18:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.056 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:05:11.056 16:18:01 -- spdk/autotest.sh@78 -- # rm -f 00:05:11.056 16:18:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.056 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:11.056 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:11.056 16:18:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:11.056 16:18:02 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:11.056 16:18:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:11.056 16:18:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:11.056 
16:18:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.056 16:18:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:11.056 16:18:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:11.056 16:18:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.056 16:18:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.056 16:18:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.056 16:18:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:11.056 16:18:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:11.056 16:18:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.056 16:18:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.056 16:18:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.056 16:18:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:11.056 16:18:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:11.056 16:18:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:11.056 16:18:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.056 16:18:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:11.056 16:18:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:11.056 16:18:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:11.056 16:18:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:11.057 16:18:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:11.057 16:18:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:11.057 16:18:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.057 16:18:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.057 16:18:02 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:11.057 16:18:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:11.057 16:18:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:11.316 No valid GPT data, bailing 00:05:11.316 16:18:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.316 16:18:02 -- scripts/common.sh@394 -- # pt= 00:05:11.317 16:18:02 -- scripts/common.sh@395 -- # return 1 00:05:11.317 16:18:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.317 1+0 records in 00:05:11.317 1+0 records out 00:05:11.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595992 s, 176 MB/s 00:05:11.317 16:18:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.317 16:18:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.317 16:18:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:11.317 16:18:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:11.317 16:18:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:11.317 No valid GPT data, bailing 00:05:11.317 16:18:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.317 16:18:02 -- scripts/common.sh@394 -- # pt= 00:05:11.317 16:18:02 -- scripts/common.sh@395 -- # return 1 00:05:11.317 16:18:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:11.317 1+0 records in 00:05:11.317 1+0 records out 00:05:11.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00320318 s, 327 MB/s 00:05:11.317 16:18:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.317 16:18:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.317 16:18:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:11.317 16:18:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:11.317 16:18:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:11.317 No valid GPT data, bailing 00:05:11.317 16:18:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:11.317 16:18:03 -- scripts/common.sh@394 -- # pt= 00:05:11.317 16:18:03 -- scripts/common.sh@395 -- # return 1 00:05:11.317 16:18:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:11.317 1+0 records in 00:05:11.317 1+0 records out 00:05:11.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436276 s, 240 MB/s 00:05:11.317 16:18:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.317 16:18:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.317 16:18:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:11.317 16:18:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:11.317 16:18:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:11.317 No valid GPT data, bailing 00:05:11.577 16:18:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:11.577 16:18:03 -- scripts/common.sh@394 -- # pt= 00:05:11.577 16:18:03 -- scripts/common.sh@395 -- # return 1 00:05:11.577 16:18:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:11.577 1+0 records in 00:05:11.577 1+0 records out 00:05:11.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055028 s, 191 MB/s 00:05:11.577 16:18:03 -- spdk/autotest.sh@105 -- # sync 00:05:11.577 16:18:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.577 16:18:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.577 16:18:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:14.117 16:18:05 -- spdk/autotest.sh@111 -- # uname -s 00:05:14.117 16:18:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:14.117 16:18:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:14.117 16:18:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:15.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.057 Hugepages 00:05:15.057 node hugesize free / total 00:05:15.057 node0 1048576kB 0 / 0 00:05:15.057 node0 2048kB 0 / 0 00:05:15.057 00:05:15.057 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.057 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:15.317 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:15.317 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:15.317 16:18:06 -- spdk/autotest.sh@117 -- # uname -s 00:05:15.317 16:18:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:15.317 16:18:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:15.317 16:18:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.312 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.312 16:18:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:17.250 16:18:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:17.250 16:18:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:17.250 16:18:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:17.250 16:18:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:17.250 16:18:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:17.250 16:18:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:17.250 16:18:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.250 16:18:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:17.250 16:18:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:17.510 16:18:09 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:17.510 16:18:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:17.510 16:18:09 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.080 Waiting for block devices as requested 00:05:18.080 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.080 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.080 16:18:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:18.080 16:18:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:18.080 16:18:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:18.080 16:18:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:18.080 16:18:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:18.080 16:18:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:18.080 16:18:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:18.080 16:18:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:18.080 16:18:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:18.080 16:18:09 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:18.080 16:18:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:18.080 16:18:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:18.080 16:18:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:18.340 16:18:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:18.340 16:18:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:18.340 16:18:09 -- common/autotest_common.sh@1541 -- # continue 00:05:18.340 16:18:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:18.340 16:18:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:18.340 16:18:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.340 16:18:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:18.340 16:18:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:18.340 16:18:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:18.340 16:18:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:18.341 16:18:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:18.341 16:18:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:18.341 16:18:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:18.341 16:18:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:18.341 16:18:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:18.341 16:18:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:18.341 16:18:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:18.341 16:18:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:18.341 16:18:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:18.341 16:18:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:18.341 16:18:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:18.341 16:18:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:18.341 16:18:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:18.341 16:18:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:18.341 16:18:09 -- common/autotest_common.sh@1541 -- # continue 00:05:18.341 16:18:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:18.341 16:18:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.341 16:18:09 -- common/autotest_common.sh@10 -- # set +x 00:05:18.341 16:18:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:18.341 16:18:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.341 16:18:09 -- common/autotest_common.sh@10 -- # set +x 00:05:18.341 16:18:09 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.281 16:18:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:19.282 16:18:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.282 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.541 16:18:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:19.542 16:18:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:19.542 16:18:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:19.542 16:18:11 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:19.542 16:18:11 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:19.542 16:18:11 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:19.542 16:18:11 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:19.542 16:18:11 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:19.542 
16:18:11 -- common/autotest_common.sh@1496 -- # bdfs=()
00:05:19.542 16:18:11 -- common/autotest_common.sh@1496 -- # local bdfs
00:05:19.542 16:18:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:19.542 16:18:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:19.542 16:18:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:05:19.542 16:18:11 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:05:19.542 16:18:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:05:19.542 16:18:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:05:19.542 16:18:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:05:19.542 16:18:11 -- common/autotest_common.sh@1564 -- # device=0x0010
00:05:19.542 16:18:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:05:19.542 16:18:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:05:19.542 16:18:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:05:19.542 16:18:11 -- common/autotest_common.sh@1564 -- # device=0x0010
00:05:19.542 16:18:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:05:19.542 16:18:11 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:05:19.542 16:18:11 -- common/autotest_common.sh@1570 -- # return 0
00:05:19.542 16:18:11 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:05:19.542 16:18:11 -- common/autotest_common.sh@1578 -- # return 0
00:05:19.542 16:18:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:05:19.542 16:18:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:05:19.542 16:18:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:19.542 16:18:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:19.542 16:18:11 -- spdk/autotest.sh@149 -- # timing_enter lib
00:05:19.542 16:18:11 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:19.542 16:18:11 -- common/autotest_common.sh@10 -- # set +x
00:05:19.542 16:18:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:05:19.542 16:18:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:05:19.542 16:18:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:19.542 16:18:11 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:19.542 16:18:11 -- common/autotest_common.sh@10 -- # set +x
00:05:19.542 ************************************
00:05:19.542 START TEST env
00:05:19.542 ************************************
00:05:19.542 16:18:11 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:05:19.542 * Looking for test storage...
00:05:19.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1681 -- # lcov --version
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:19.802 16:18:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:19.802 16:18:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:19.802 16:18:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:19.802 16:18:11 env -- scripts/common.sh@336 -- # IFS=.-:
00:05:19.802 16:18:11 env -- scripts/common.sh@336 -- # read -ra ver1
00:05:19.802 16:18:11 env -- scripts/common.sh@337 -- # IFS=.-:
00:05:19.802 16:18:11 env -- scripts/common.sh@337 -- # read -ra ver2
00:05:19.802 16:18:11 env -- scripts/common.sh@338 -- # local 'op=<'
00:05:19.802 16:18:11 env -- scripts/common.sh@340 -- # ver1_l=2
00:05:19.802 16:18:11 env -- scripts/common.sh@341 -- # ver2_l=1
00:05:19.802 16:18:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:19.802 16:18:11 env -- scripts/common.sh@344 -- # case "$op" in
00:05:19.802 16:18:11 env -- scripts/common.sh@345 -- # : 1
00:05:19.802 16:18:11 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:19.802 16:18:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:19.802 16:18:11 env -- scripts/common.sh@365 -- # decimal 1
00:05:19.802 16:18:11 env -- scripts/common.sh@353 -- # local d=1
00:05:19.802 16:18:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:19.802 16:18:11 env -- scripts/common.sh@355 -- # echo 1
00:05:19.802 16:18:11 env -- scripts/common.sh@365 -- # ver1[v]=1
00:05:19.802 16:18:11 env -- scripts/common.sh@366 -- # decimal 2
00:05:19.802 16:18:11 env -- scripts/common.sh@353 -- # local d=2
00:05:19.802 16:18:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:19.802 16:18:11 env -- scripts/common.sh@355 -- # echo 2
00:05:19.802 16:18:11 env -- scripts/common.sh@366 -- # ver2[v]=2
00:05:19.802 16:18:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:19.802 16:18:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:19.802 16:18:11 env -- scripts/common.sh@368 -- # return 0
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:19.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.802 --rc genhtml_branch_coverage=1
00:05:19.802 --rc genhtml_function_coverage=1
00:05:19.802 --rc genhtml_legend=1
00:05:19.802 --rc geninfo_all_blocks=1
00:05:19.802 --rc geninfo_unexecuted_blocks=1
00:05:19.802
00:05:19.802 '
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:19.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.802 --rc genhtml_branch_coverage=1
00:05:19.802 --rc genhtml_function_coverage=1
00:05:19.802 --rc genhtml_legend=1
00:05:19.802 --rc geninfo_all_blocks=1
00:05:19.802 --rc geninfo_unexecuted_blocks=1
00:05:19.802
00:05:19.802 '
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:19.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.802 --rc genhtml_branch_coverage=1
00:05:19.802 --rc genhtml_function_coverage=1
00:05:19.802 --rc genhtml_legend=1
00:05:19.802 --rc geninfo_all_blocks=1
00:05:19.802 --rc geninfo_unexecuted_blocks=1
00:05:19.802
00:05:19.802 '
00:05:19.802 16:18:11 env -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:19.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.802 --rc genhtml_branch_coverage=1
00:05:19.802 --rc genhtml_function_coverage=1
00:05:19.802 --rc genhtml_legend=1
00:05:19.802 --rc geninfo_all_blocks=1
00:05:19.802 --rc geninfo_unexecuted_blocks=1
00:05:19.802
00:05:19.803 '
00:05:19.803 16:18:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:05:19.803 16:18:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:19.803 16:18:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:19.803 16:18:11 env -- common/autotest_common.sh@10 -- # set +x
00:05:19.803 ************************************
00:05:19.803 START TEST env_memory
00:05:19.803 ************************************
00:05:19.803 16:18:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:05:19.803
00:05:19.803
00:05:19.803 CUnit - A unit testing framework for C - Version 2.1-3
00:05:19.803 http://cunit.sourceforge.net/
00:05:19.803
00:05:19.803
00:05:19.803 Suite: memory
00:05:19.803 Test: alloc and free memory map ...[2024-11-28 16:18:11.490292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:19.803 passed
00:05:19.803 Test: mem map translation ...[2024-11-28 16:18:11.529591] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:19.803 [2024-11-28 16:18:11.529628] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:19.803 [2024-11-28 16:18:11.529678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:19.803 [2024-11-28 16:18:11.529695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:20.064 passed
00:05:20.064 Test: mem map registration ...[2024-11-28 16:18:11.589937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:05:20.064 [2024-11-28 16:18:11.589971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:05:20.064 passed
00:05:20.064 Test: mem map adjacent registrations ...passed
00:05:20.064
00:05:20.064 Run Summary: Type Total Ran Passed Failed Inactive
00:05:20.064 suites 1 1 n/a 0 0
00:05:20.064 tests 4 4 4 0 0
00:05:20.064 asserts 152 152 152 0 n/a
00:05:20.064
00:05:20.064 Elapsed time = 0.218 seconds
00:05:20.064
00:05:20.064 real 0m0.270s
00:05:20.064 user 0m0.233s
00:05:20.064 sys 0m0.026s
00:05:20.064 16:18:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:20.064 16:18:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:05:20.064 ************************************
00:05:20.064 END TEST env_memory
00:05:20.064 ************************************
00:05:20.064 16:18:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:05:20.064 16:18:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:20.064 16:18:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.064 16:18:11 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.064 ************************************
00:05:20.064 START TEST env_vtophys
00:05:20.064 ************************************
00:05:20.064 16:18:11 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:05:20.064 EAL: lib.eal log level changed from notice to debug
00:05:20.064 EAL: Detected lcore 0 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 1 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 2 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 3 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 4 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 5 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 6 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 7 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 8 as core 0 on socket 0
00:05:20.064 EAL: Detected lcore 9 as core 0 on socket 0
00:05:20.064 EAL: Maximum logical cores by configuration: 128
00:05:20.064 EAL: Detected CPU lcores: 10
00:05:20.064 EAL: Detected NUMA nodes: 1
00:05:20.064 EAL: Checking presence of .so 'librte_eal.so.24.0'
00:05:20.064 EAL: Detected shared linkage of DPDK
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0
00:05:20.064 EAL: Registered [vdev] bus.
00:05:20.064 EAL: bus.vdev log level changed from disabled to notice
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0
00:05:20.064 EAL: pmd.net.i40e.init log level changed from disabled to notice
00:05:20.064 EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:05:20.064 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:05:20.064 EAL: No shared files mode enabled, IPC will be disabled
00:05:20.064 EAL: No shared files mode enabled, IPC is disabled
00:05:20.064 EAL: Selected IOVA mode 'PA'
00:05:20.064 EAL: Probing VFIO support...
00:05:20.064 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:05:20.064 EAL: VFIO modules not loaded, skipping VFIO support...
00:05:20.064 EAL: Ask a virtual area of 0x2e000 bytes
00:05:20.064 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:20.064 EAL: Setting up physically contiguous memory...
00:05:20.064 EAL: Setting maximum number of open files to 524288
00:05:20.064 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:20.064 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:20.064 EAL: Ask a virtual area of 0x61000 bytes
00:05:20.064 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:20.064 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:20.064 EAL: Ask a virtual area of 0x400000000 bytes
00:05:20.064 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:20.065 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:20.065 EAL: Ask a virtual area of 0x61000 bytes
00:05:20.065 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:20.065 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:20.065 EAL: Ask a virtual area of 0x400000000 bytes
00:05:20.065 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:20.065 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:20.065 EAL: Ask a virtual area of 0x61000 bytes
00:05:20.065 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:20.065 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:20.065 EAL: Ask a virtual area of 0x400000000 bytes
00:05:20.065 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:20.065 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:20.065 EAL: Ask a virtual area of 0x61000 bytes
00:05:20.065 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:20.065 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:20.065 EAL: Ask a virtual area of 0x400000000 bytes
00:05:20.065 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:20.065 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:20.065 EAL: Hugepages will be freed exactly as allocated.
00:05:20.065 EAL: No shared files mode enabled, IPC is disabled
00:05:20.065 EAL: No shared files mode enabled, IPC is disabled
00:05:20.325 EAL: TSC frequency is ~2290000 KHz
00:05:20.325 EAL: Main lcore 0 is ready (tid=7f6ccac49a40;cpuset=[0])
00:05:20.325 EAL: Trying to obtain current memory policy.
00:05:20.325 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.325 EAL: Restoring previous memory policy: 0
00:05:20.325 EAL: request: mp_malloc_sync
00:05:20.325 EAL: No shared files mode enabled, IPC is disabled
00:05:20.325 EAL: Heap on socket 0 was expanded by 2MB
00:05:20.325 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:05:20.325 EAL: No shared files mode enabled, IPC is disabled
00:05:20.325 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:20.325 EAL: Mem event callback 'spdk:(nil)' registered
00:05:20.325 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:05:20.325
00:05:20.325
00:05:20.325 CUnit - A unit testing framework for C - Version 2.1-3
00:05:20.325 http://cunit.sourceforge.net/
00:05:20.325
00:05:20.325
00:05:20.325 Suite: components_suite
00:05:20.585 Test: vtophys_malloc_test ...passed
00:05:20.585 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.585 EAL: Restoring previous memory policy: 4
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was expanded by 4MB
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was shrunk by 4MB
00:05:20.585 EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.585 EAL: Restoring previous memory policy: 4
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was expanded by 6MB
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was shrunk by 6MB
00:05:20.585 EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.585 EAL: Restoring previous memory policy: 4
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was expanded by 10MB
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was shrunk by 10MB
00:05:20.585 EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.585 EAL: Restoring previous memory policy: 4
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was expanded by 18MB
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was shrunk by 18MB
00:05:20.585 EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.585 EAL: Restoring previous memory policy: 4
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was expanded by 34MB
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was shrunk by 34MB
00:05:20.585 EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.585 EAL: Restoring previous memory policy: 4
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was expanded by 66MB
00:05:20.585 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.585 EAL: request: mp_malloc_sync
00:05:20.585 EAL: No shared files mode enabled, IPC is disabled
00:05:20.585 EAL: Heap on socket 0 was shrunk by 66MB
00:05:20.585 EAL: Trying to obtain current memory policy.
00:05:20.585 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.845 EAL: Restoring previous memory policy: 4
00:05:20.845 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.845 EAL: request: mp_malloc_sync
00:05:20.845 EAL: No shared files mode enabled, IPC is disabled
00:05:20.845 EAL: Heap on socket 0 was expanded by 130MB
00:05:20.845 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.845 EAL: request: mp_malloc_sync
00:05:20.845 EAL: No shared files mode enabled, IPC is disabled
00:05:20.845 EAL: Heap on socket 0 was shrunk by 130MB
00:05:20.845 EAL: Trying to obtain current memory policy.
00:05:20.845 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:20.845 EAL: Restoring previous memory policy: 4
00:05:20.845 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.845 EAL: request: mp_malloc_sync
00:05:20.845 EAL: No shared files mode enabled, IPC is disabled
00:05:20.845 EAL: Heap on socket 0 was expanded by 258MB
00:05:20.845 EAL: Calling mem event callback 'spdk:(nil)'
00:05:20.845 EAL: request: mp_malloc_sync
00:05:20.845 EAL: No shared files mode enabled, IPC is disabled
00:05:20.845 EAL: Heap on socket 0 was shrunk by 258MB
00:05:20.845 EAL: Trying to obtain current memory policy.
00:05:20.845 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:21.105 EAL: Restoring previous memory policy: 4
00:05:21.105 EAL: Calling mem event callback 'spdk:(nil)'
00:05:21.105 EAL: request: mp_malloc_sync
00:05:21.105 EAL: No shared files mode enabled, IPC is disabled
00:05:21.105 EAL: Heap on socket 0 was expanded by 514MB
00:05:21.105 EAL: Calling mem event callback 'spdk:(nil)'
00:05:21.105 EAL: request: mp_malloc_sync
00:05:21.105 EAL: No shared files mode enabled, IPC is disabled
00:05:21.105 EAL: Heap on socket 0 was shrunk by 514MB
00:05:21.105 EAL: Trying to obtain current memory policy.
00:05:21.105 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:21.365 EAL: Restoring previous memory policy: 4
00:05:21.365 EAL: Calling mem event callback 'spdk:(nil)'
00:05:21.365 EAL: request: mp_malloc_sync
00:05:21.365 EAL: No shared files mode enabled, IPC is disabled
00:05:21.365 EAL: Heap on socket 0 was expanded by 1026MB
00:05:21.625 EAL: Calling mem event callback 'spdk:(nil)'
00:05:21.625 passed
00:05:21.625
00:05:21.625 Run Summary: Type Total Ran Passed Failed Inactive
00:05:21.625 suites 1 1 n/a 0 0
00:05:21.625 tests 2 2 2 0 0
00:05:21.625 asserts 5365 5365 5365 0 n/a
00:05:21.625
00:05:21.625 Elapsed time = 1.367 seconds
00:05:21.625 EAL: request: mp_malloc_sync
00:05:21.625 EAL: No shared files mode enabled, IPC is disabled
00:05:21.625 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:21.625 EAL: Calling mem event callback 'spdk:(nil)'
00:05:21.625 EAL: request: mp_malloc_sync
00:05:21.625 EAL: No shared files mode enabled, IPC is disabled
00:05:21.625 EAL: Heap on socket 0 was shrunk by 2MB
00:05:21.625 EAL: No shared files mode enabled, IPC is disabled
00:05:21.625 EAL: No shared files mode enabled, IPC is disabled
00:05:21.625 EAL: No shared files mode enabled, IPC is disabled
00:05:21.625
00:05:21.625 real 0m1.625s
00:05:21.625 user 0m0.770s
00:05:21.625 sys 0m0.723s
00:05:21.625 16:18:13 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:21.625 16:18:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:21.625 ************************************
00:05:21.625 END TEST env_vtophys
00:05:21.625 ************************************
00:05:21.886 16:18:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:21.886 16:18:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:21.886 16:18:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:21.886 16:18:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:21.886 ************************************
00:05:21.886 START TEST env_pci
00:05:21.886 ************************************
00:05:21.886 16:18:13 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:21.886
00:05:21.886
00:05:21.886 CUnit - A unit testing framework for C - Version 2.1-3
00:05:21.886 http://cunit.sourceforge.net/
00:05:21.886
00:05:21.886
00:05:21.886 Suite: pci
00:05:21.886 Test: pci_hook ...[2024-11-28 16:18:13.497013] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69036 has claimed it
00:05:21.886 passed
00:05:21.886
00:05:21.886 Run Summary: Type Total Ran Passed Failed Inactive
00:05:21.886 suites 1 1 n/a 0 0
00:05:21.886 tests 1 1 1 0 0
00:05:21.886 asserts 25 25 25 0 n/a
00:05:21.886
00:05:21.886 Elapsed time = 0.009 seconds
00:05:21.886 EAL: Cannot find device (10000:00:01.0)
00:05:21.886 EAL: Failed to attach device on primary process
00:05:21.886
00:05:21.886 real 0m0.101s
00:05:21.886 user 0m0.046s
00:05:21.886 sys 0m0.054s
00:05:21.886 16:18:13 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:21.886 16:18:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:21.886 ************************************
00:05:21.886 END TEST env_pci
00:05:21.886 ************************************
00:05:21.886 16:18:13 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:21.886 16:18:13 env -- env/env.sh@15 -- # uname
00:05:21.886 16:18:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:21.886 16:18:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:21.886 16:18:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:21.886 16:18:13 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:05:21.886 16:18:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:21.886 16:18:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:21.886 ************************************
00:05:21.886 START TEST env_dpdk_post_init
00:05:21.886 ************************************
00:05:21.886 16:18:13 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:22.146 EAL: Detected CPU lcores: 10
00:05:22.146 EAL: Detected NUMA nodes: 1
00:05:22.146 EAL: Detected shared linkage of DPDK
00:05:22.146 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:22.146 EAL: Selected IOVA mode 'PA'
00:05:22.146 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:22.146 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:22.146 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:22.146 Starting DPDK initialization...
00:05:22.146 Starting SPDK post initialization...
00:05:22.146 SPDK NVMe probe
00:05:22.146 Attaching to 0000:00:10.0
00:05:22.146 Attaching to 0000:00:11.0
00:05:22.146 Attached to 0000:00:10.0
00:05:22.146 Attached to 0000:00:11.0
00:05:22.146 Cleaning up...
00:05:22.146
00:05:22.146 real 0m0.256s
00:05:22.146 user 0m0.070s
00:05:22.146 sys 0m0.086s
00:05:22.146 16:18:13 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.146 16:18:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:22.147 ************************************
00:05:22.147 END TEST env_dpdk_post_init
00:05:22.147 ************************************
00:05:22.407 16:18:13 env -- env/env.sh@26 -- # uname
00:05:22.407 16:18:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:22.407 16:18:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:22.407 16:18:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.407 16:18:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.407 16:18:13 env -- common/autotest_common.sh@10 -- # set +x
00:05:22.407 ************************************
00:05:22.407 START TEST env_mem_callbacks
00:05:22.407 ************************************
00:05:22.407 16:18:13 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:22.407 EAL: Detected CPU lcores: 10
00:05:22.407 EAL: Detected NUMA nodes: 1
00:05:22.407 EAL: Detected shared linkage of DPDK
00:05:22.407 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:22.407 EAL: Selected IOVA mode 'PA'
00:05:22.407
00:05:22.407
00:05:22.407 CUnit - A unit testing framework for C - Version 2.1-3
00:05:22.407 http://cunit.sourceforge.net/
00:05:22.407
00:05:22.407
00:05:22.407 Suite: memory
00:05:22.407 Test: test ...
00:05:22.407 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:22.407 register 0x200000200000 2097152
00:05:22.407 malloc 3145728
00:05:22.407 register 0x200000400000 4194304
00:05:22.407 buf 0x200000500000 len 3145728 PASSED
00:05:22.407 malloc 64
00:05:22.407 buf 0x2000004fff40 len 64 PASSED
00:05:22.407 malloc 4194304
00:05:22.407 register 0x200000800000 6291456
00:05:22.407 buf 0x200000a00000 len 4194304 PASSED
00:05:22.407 free 0x200000500000 3145728
00:05:22.407 free 0x2000004fff40 64
00:05:22.407 unregister 0x200000400000 4194304 PASSED
00:05:22.407 free 0x200000a00000 4194304
00:05:22.407 unregister 0x200000800000 6291456 PASSED
00:05:22.407 malloc 8388608
00:05:22.407 register 0x200000400000 10485760
00:05:22.407 buf 0x200000600000 len 8388608 PASSED
00:05:22.407 free 0x200000600000 8388608
00:05:22.407 unregister 0x200000400000 10485760 PASSED
00:05:22.407 passed
00:05:22.407
00:05:22.407 Run Summary: Type Total Ran Passed Failed Inactive
00:05:22.407 suites 1 1 n/a 0 0
00:05:22.407 tests 1 1 1 0 0
00:05:22.407 asserts 15 15 15 0 n/a
00:05:22.407
00:05:22.407 Elapsed time = 0.012 seconds
00:05:22.667
00:05:22.667 real 0m0.206s
00:05:22.667 user 0m0.031s
00:05:22.667 sys 0m0.071s
00:05:22.667 16:18:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.667 16:18:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:22.667 ************************************
00:05:22.667 END TEST env_mem_callbacks
00:05:22.667 ************************************
00:05:22.667
00:05:22.667 real 0m3.054s
00:05:22.667 user 0m1.360s
00:05:22.667 sys 0m1.357s
00:05:22.667 16:18:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.667 ************************************
00:05:22.667 END TEST env
00:05:22.667 16:18:14 env -- common/autotest_common.sh@10 -- # set +x
00:05:22.667 ************************************
00:05:22.667 16:18:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:22.667 16:18:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.667 16:18:14 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.667 16:18:14 -- common/autotest_common.sh@10 -- # set +x
00:05:22.667 ************************************
00:05:22.667 START TEST rpc
00:05:22.667 ************************************
00:05:22.667 16:18:14 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:22.667 * Looking for test storage...
00:05:22.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:22.667 16:18:14 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:22.667 16:18:14 rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:05:22.667 16:18:14 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:22.927 16:18:14 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:22.927 16:18:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:22.927 16:18:14 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:22.927 16:18:14 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:22.927 16:18:14 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:22.927 16:18:14 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:22.927 16:18:14 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:22.927 16:18:14 rpc -- scripts/common.sh@345 -- # : 1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:22.927 16:18:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:22.927 16:18:14 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@353 -- # local d=1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:22.927 16:18:14 rpc -- scripts/common.sh@355 -- # echo 1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:22.927 16:18:14 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@353 -- # local d=2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:22.927 16:18:14 rpc -- scripts/common.sh@355 -- # echo 2
00:05:22.927 16:18:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:22.928 16:18:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:22.928 16:18:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:22.928 16:18:14 rpc -- scripts/common.sh@368 -- # return 0
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.928 --rc genhtml_branch_coverage=1
00:05:22.928 --rc genhtml_function_coverage=1
00:05:22.928 --rc genhtml_legend=1
00:05:22.928 --rc geninfo_all_blocks=1
00:05:22.928 --rc geninfo_unexecuted_blocks=1
00:05:22.928
00:05:22.928 '
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.928 --rc genhtml_branch_coverage=1
00:05:22.928 --rc genhtml_function_coverage=1
00:05:22.928 --rc genhtml_legend=1
00:05:22.928 --rc geninfo_all_blocks=1
00:05:22.928 --rc geninfo_unexecuted_blocks=1
00:05:22.928
00:05:22.928 '
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.928 --rc genhtml_branch_coverage=1
00:05:22.928 --rc genhtml_function_coverage=1
00:05:22.928 --rc genhtml_legend=1
00:05:22.928 --rc geninfo_all_blocks=1
00:05:22.928 --rc geninfo_unexecuted_blocks=1
00:05:22.928
00:05:22.928 '
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.928 --rc genhtml_branch_coverage=1
00:05:22.928 --rc genhtml_function_coverage=1
00:05:22.928 --rc genhtml_legend=1
00:05:22.928 --rc geninfo_all_blocks=1
00:05:22.928 --rc geninfo_unexecuted_blocks=1
00:05:22.928
00:05:22.928 '
00:05:22.928 16:18:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:05:22.928 16:18:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69164
00:05:22.928 16:18:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:22.928 16:18:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69164
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@831 -- # '[' -z 69164 ']'
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:22.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:22.928 16:18:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.928 [2024-11-28 16:18:14.608418] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:05:22.928 [2024-11-28 16:18:14.608539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69164 ] 00:05:23.187 [2024-11-28 16:18:14.770261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.187 [2024-11-28 16:18:14.815297] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:23.187 [2024-11-28 16:18:14.815370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69164' to capture a snapshot of events at runtime. 00:05:23.187 [2024-11-28 16:18:14.815383] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.187 [2024-11-28 16:18:14.815392] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.187 [2024-11-28 16:18:14.815405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69164 for offline analysis/debug. 
00:05:23.188 [2024-11-28 16:18:14.815451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.756 16:18:15 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.756 16:18:15 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:23.756 16:18:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.756 16:18:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.756 16:18:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:23.756 16:18:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:23.756 16:18:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.756 16:18:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.756 16:18:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.756 ************************************ 00:05:23.756 START TEST rpc_integrity 00:05:23.756 ************************************ 00:05:23.756 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:23.756 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.756 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.756 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.756 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.756 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.756 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.016 16:18:15 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.016 { 00:05:24.016 "name": "Malloc0", 00:05:24.016 "aliases": [ 00:05:24.016 "3c2dc16f-0c1e-4ffb-b9ae-e0ec78fc8069" 00:05:24.016 ], 00:05:24.016 "product_name": "Malloc disk", 00:05:24.016 "block_size": 512, 00:05:24.016 "num_blocks": 16384, 00:05:24.016 "uuid": "3c2dc16f-0c1e-4ffb-b9ae-e0ec78fc8069", 00:05:24.016 "assigned_rate_limits": { 00:05:24.016 "rw_ios_per_sec": 0, 00:05:24.016 "rw_mbytes_per_sec": 0, 00:05:24.016 "r_mbytes_per_sec": 0, 00:05:24.016 "w_mbytes_per_sec": 0 00:05:24.016 }, 00:05:24.016 "claimed": false, 00:05:24.016 "zoned": false, 00:05:24.016 "supported_io_types": { 00:05:24.016 "read": true, 00:05:24.016 "write": true, 00:05:24.016 "unmap": true, 00:05:24.016 "flush": true, 00:05:24.016 "reset": true, 00:05:24.016 "nvme_admin": false, 00:05:24.016 "nvme_io": false, 00:05:24.016 "nvme_io_md": false, 00:05:24.016 "write_zeroes": true, 00:05:24.016 "zcopy": true, 00:05:24.016 "get_zone_info": false, 00:05:24.016 "zone_management": false, 00:05:24.016 "zone_append": false, 00:05:24.016 "compare": false, 00:05:24.016 "compare_and_write": false, 00:05:24.016 "abort": true, 00:05:24.016 "seek_hole": false, 
00:05:24.016 "seek_data": false, 00:05:24.016 "copy": true, 00:05:24.016 "nvme_iov_md": false 00:05:24.016 }, 00:05:24.016 "memory_domains": [ 00:05:24.016 { 00:05:24.016 "dma_device_id": "system", 00:05:24.016 "dma_device_type": 1 00:05:24.016 }, 00:05:24.016 { 00:05:24.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.016 "dma_device_type": 2 00:05:24.016 } 00:05:24.016 ], 00:05:24.016 "driver_specific": {} 00:05:24.016 } 00:05:24.016 ]' 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.016 [2024-11-28 16:18:15.634129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:24.016 [2024-11-28 16:18:15.634222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.016 [2024-11-28 16:18:15.634253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:24.016 [2024-11-28 16:18:15.634263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.016 [2024-11-28 16:18:15.636556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.016 [2024-11-28 16:18:15.636597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.016 Passthru0 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:24.016 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.016 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.016 { 00:05:24.016 "name": "Malloc0", 00:05:24.016 "aliases": [ 00:05:24.016 "3c2dc16f-0c1e-4ffb-b9ae-e0ec78fc8069" 00:05:24.016 ], 00:05:24.016 "product_name": "Malloc disk", 00:05:24.016 "block_size": 512, 00:05:24.016 "num_blocks": 16384, 00:05:24.016 "uuid": "3c2dc16f-0c1e-4ffb-b9ae-e0ec78fc8069", 00:05:24.016 "assigned_rate_limits": { 00:05:24.016 "rw_ios_per_sec": 0, 00:05:24.016 "rw_mbytes_per_sec": 0, 00:05:24.016 "r_mbytes_per_sec": 0, 00:05:24.016 "w_mbytes_per_sec": 0 00:05:24.016 }, 00:05:24.016 "claimed": true, 00:05:24.016 "claim_type": "exclusive_write", 00:05:24.016 "zoned": false, 00:05:24.017 "supported_io_types": { 00:05:24.017 "read": true, 00:05:24.017 "write": true, 00:05:24.017 "unmap": true, 00:05:24.017 "flush": true, 00:05:24.017 "reset": true, 00:05:24.017 "nvme_admin": false, 00:05:24.017 "nvme_io": false, 00:05:24.017 "nvme_io_md": false, 00:05:24.017 "write_zeroes": true, 00:05:24.017 "zcopy": true, 00:05:24.017 "get_zone_info": false, 00:05:24.017 "zone_management": false, 00:05:24.017 "zone_append": false, 00:05:24.017 "compare": false, 00:05:24.017 "compare_and_write": false, 00:05:24.017 "abort": true, 00:05:24.017 "seek_hole": false, 00:05:24.017 "seek_data": false, 00:05:24.017 "copy": true, 00:05:24.017 "nvme_iov_md": false 00:05:24.017 }, 00:05:24.017 "memory_domains": [ 00:05:24.017 { 00:05:24.017 "dma_device_id": "system", 00:05:24.017 "dma_device_type": 1 00:05:24.017 }, 00:05:24.017 { 00:05:24.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.017 "dma_device_type": 2 00:05:24.017 } 00:05:24.017 ], 00:05:24.017 "driver_specific": {} 00:05:24.017 }, 00:05:24.017 { 00:05:24.017 "name": "Passthru0", 00:05:24.017 "aliases": [ 00:05:24.017 "e769e060-1463-5619-8d9c-0b514f6687eb" 00:05:24.017 ], 00:05:24.017 "product_name": "passthru", 00:05:24.017 
"block_size": 512, 00:05:24.017 "num_blocks": 16384, 00:05:24.017 "uuid": "e769e060-1463-5619-8d9c-0b514f6687eb", 00:05:24.017 "assigned_rate_limits": { 00:05:24.017 "rw_ios_per_sec": 0, 00:05:24.017 "rw_mbytes_per_sec": 0, 00:05:24.017 "r_mbytes_per_sec": 0, 00:05:24.017 "w_mbytes_per_sec": 0 00:05:24.017 }, 00:05:24.017 "claimed": false, 00:05:24.017 "zoned": false, 00:05:24.017 "supported_io_types": { 00:05:24.017 "read": true, 00:05:24.017 "write": true, 00:05:24.017 "unmap": true, 00:05:24.017 "flush": true, 00:05:24.017 "reset": true, 00:05:24.017 "nvme_admin": false, 00:05:24.017 "nvme_io": false, 00:05:24.017 "nvme_io_md": false, 00:05:24.017 "write_zeroes": true, 00:05:24.017 "zcopy": true, 00:05:24.017 "get_zone_info": false, 00:05:24.017 "zone_management": false, 00:05:24.017 "zone_append": false, 00:05:24.017 "compare": false, 00:05:24.017 "compare_and_write": false, 00:05:24.017 "abort": true, 00:05:24.017 "seek_hole": false, 00:05:24.017 "seek_data": false, 00:05:24.017 "copy": true, 00:05:24.017 "nvme_iov_md": false 00:05:24.017 }, 00:05:24.017 "memory_domains": [ 00:05:24.017 { 00:05:24.017 "dma_device_id": "system", 00:05:24.017 "dma_device_type": 1 00:05:24.017 }, 00:05:24.017 { 00:05:24.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.017 "dma_device_type": 2 00:05:24.017 } 00:05:24.017 ], 00:05:24.017 "driver_specific": { 00:05:24.017 "passthru": { 00:05:24.017 "name": "Passthru0", 00:05:24.017 "base_bdev_name": "Malloc0" 00:05:24.017 } 00:05:24.017 } 00:05:24.017 } 00:05:24.017 ]' 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.017 16:18:15 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:24.017 16:18:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.017 00:05:24.017 real 0m0.292s 00:05:24.017 user 0m0.168s 00:05:24.017 sys 0m0.048s 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.017 16:18:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.017 ************************************ 00:05:24.017 END TEST rpc_integrity 00:05:24.017 ************************************ 00:05:24.277 16:18:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.277 16:18:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.277 16:18:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.277 16:18:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.277 ************************************ 00:05:24.277 START TEST rpc_plugins 00:05:24.277 ************************************ 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:24.277 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.277 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.277 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.277 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.277 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.277 { 00:05:24.277 "name": "Malloc1", 00:05:24.277 "aliases": [ 00:05:24.277 "ea7e5a0b-32bc-4a8e-9e41-c940279b51b1" 00:05:24.277 ], 00:05:24.277 "product_name": "Malloc disk", 00:05:24.277 "block_size": 4096, 00:05:24.277 "num_blocks": 256, 00:05:24.277 "uuid": "ea7e5a0b-32bc-4a8e-9e41-c940279b51b1", 00:05:24.277 "assigned_rate_limits": { 00:05:24.277 "rw_ios_per_sec": 0, 00:05:24.277 "rw_mbytes_per_sec": 0, 00:05:24.277 "r_mbytes_per_sec": 0, 00:05:24.277 "w_mbytes_per_sec": 0 00:05:24.277 }, 00:05:24.277 "claimed": false, 00:05:24.277 "zoned": false, 00:05:24.277 "supported_io_types": { 00:05:24.277 "read": true, 00:05:24.277 "write": true, 00:05:24.277 "unmap": true, 00:05:24.277 "flush": true, 00:05:24.277 "reset": true, 00:05:24.277 "nvme_admin": false, 00:05:24.277 "nvme_io": false, 00:05:24.277 "nvme_io_md": false, 00:05:24.277 "write_zeroes": true, 00:05:24.277 "zcopy": true, 00:05:24.277 "get_zone_info": false, 00:05:24.277 "zone_management": false, 00:05:24.278 "zone_append": false, 00:05:24.278 "compare": false, 00:05:24.278 "compare_and_write": false, 00:05:24.278 "abort": true, 00:05:24.278 "seek_hole": false, 00:05:24.278 "seek_data": false, 00:05:24.278 "copy": 
true, 00:05:24.278 "nvme_iov_md": false 00:05:24.278 }, 00:05:24.278 "memory_domains": [ 00:05:24.278 { 00:05:24.278 "dma_device_id": "system", 00:05:24.278 "dma_device_type": 1 00:05:24.278 }, 00:05:24.278 { 00:05:24.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.278 "dma_device_type": 2 00:05:24.278 } 00:05:24.278 ], 00:05:24.278 "driver_specific": {} 00:05:24.278 } 00:05:24.278 ]' 00:05:24.278 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:24.278 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.278 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.278 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.278 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.278 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.278 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.278 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.278 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.278 16:18:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.278 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.278 16:18:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:24.278 16:18:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.278 00:05:24.278 real 0m0.161s 00:05:24.278 user 0m0.095s 00:05:24.278 sys 0m0.028s 00:05:24.278 16:18:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.278 16:18:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:24.278 ************************************ 00:05:24.278 END TEST rpc_plugins 00:05:24.278 ************************************ 00:05:24.538 16:18:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:24.538 16:18:16 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.538 16:18:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.538 16:18:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.538 ************************************ 00:05:24.538 START TEST rpc_trace_cmd_test 00:05:24.538 ************************************ 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:24.538 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69164", 00:05:24.538 "tpoint_group_mask": "0x8", 00:05:24.538 "iscsi_conn": { 00:05:24.538 "mask": "0x2", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "scsi": { 00:05:24.538 "mask": "0x4", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "bdev": { 00:05:24.538 "mask": "0x8", 00:05:24.538 "tpoint_mask": "0xffffffffffffffff" 00:05:24.538 }, 00:05:24.538 "nvmf_rdma": { 00:05:24.538 "mask": "0x10", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "nvmf_tcp": { 00:05:24.538 "mask": "0x20", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "ftl": { 00:05:24.538 "mask": "0x40", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "blobfs": { 00:05:24.538 "mask": "0x80", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "dsa": { 00:05:24.538 "mask": "0x200", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "thread": { 00:05:24.538 "mask": "0x400", 00:05:24.538 
"tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "nvme_pcie": { 00:05:24.538 "mask": "0x800", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "iaa": { 00:05:24.538 "mask": "0x1000", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "nvme_tcp": { 00:05:24.538 "mask": "0x2000", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "bdev_nvme": { 00:05:24.538 "mask": "0x4000", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "sock": { 00:05:24.538 "mask": "0x8000", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "blob": { 00:05:24.538 "mask": "0x10000", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 }, 00:05:24.538 "bdev_raid": { 00:05:24.538 "mask": "0x20000", 00:05:24.538 "tpoint_mask": "0x0" 00:05:24.538 } 00:05:24.538 }' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:24.538 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:24.798 16:18:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:24.799 00:05:24.799 real 0m0.236s 00:05:24.799 user 0m0.187s 00:05:24.799 sys 0m0.037s 00:05:24.799 16:18:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.799 16:18:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 
************************************ 00:05:24.799 END TEST rpc_trace_cmd_test 00:05:24.799 ************************************ 00:05:24.799 16:18:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:24.799 16:18:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:24.799 16:18:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:24.799 16:18:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.799 16:18:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.799 16:18:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 ************************************ 00:05:24.799 START TEST rpc_daemon_integrity 00:05:24.799 ************************************ 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.799 { 00:05:24.799 "name": "Malloc2", 00:05:24.799 "aliases": [ 00:05:24.799 "a856c87e-4367-4547-b4e7-b39e7fdf486e" 00:05:24.799 ], 00:05:24.799 "product_name": "Malloc disk", 00:05:24.799 "block_size": 512, 00:05:24.799 "num_blocks": 16384, 00:05:24.799 "uuid": "a856c87e-4367-4547-b4e7-b39e7fdf486e", 00:05:24.799 "assigned_rate_limits": { 00:05:24.799 "rw_ios_per_sec": 0, 00:05:24.799 "rw_mbytes_per_sec": 0, 00:05:24.799 "r_mbytes_per_sec": 0, 00:05:24.799 "w_mbytes_per_sec": 0 00:05:24.799 }, 00:05:24.799 "claimed": false, 00:05:24.799 "zoned": false, 00:05:24.799 "supported_io_types": { 00:05:24.799 "read": true, 00:05:24.799 "write": true, 00:05:24.799 "unmap": true, 00:05:24.799 "flush": true, 00:05:24.799 "reset": true, 00:05:24.799 "nvme_admin": false, 00:05:24.799 "nvme_io": false, 00:05:24.799 "nvme_io_md": false, 00:05:24.799 "write_zeroes": true, 00:05:24.799 "zcopy": true, 00:05:24.799 "get_zone_info": false, 00:05:24.799 "zone_management": false, 00:05:24.799 "zone_append": false, 00:05:24.799 "compare": false, 00:05:24.799 "compare_and_write": false, 00:05:24.799 "abort": true, 00:05:24.799 "seek_hole": false, 00:05:24.799 "seek_data": false, 00:05:24.799 "copy": true, 00:05:24.799 "nvme_iov_md": false 00:05:24.799 }, 00:05:24.799 "memory_domains": [ 00:05:24.799 { 00:05:24.799 "dma_device_id": "system", 00:05:24.799 "dma_device_type": 1 00:05:24.799 }, 00:05:24.799 { 00:05:24.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.799 "dma_device_type": 2 00:05:24.799 } 00:05:24.799 ], 00:05:24.799 "driver_specific": {} 00:05:24.799 } 00:05:24.799 ]' 00:05:24.799 
16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 [2024-11-28 16:18:16.529024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:24.799 [2024-11-28 16:18:16.529079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.799 [2024-11-28 16:18:16.529116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:24.799 [2024-11-28 16:18:16.529125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.799 [2024-11-28 16:18:16.531233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.799 [2024-11-28 16:18:16.531270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.799 Passthru0 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.799 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.799 { 00:05:24.799 "name": "Malloc2", 00:05:24.799 "aliases": [ 00:05:24.799 "a856c87e-4367-4547-b4e7-b39e7fdf486e" 00:05:24.799 ], 00:05:24.799 "product_name": "Malloc disk", 00:05:24.799 "block_size": 512, 
00:05:24.799 "num_blocks": 16384, 00:05:24.799 "uuid": "a856c87e-4367-4547-b4e7-b39e7fdf486e", 00:05:24.799 "assigned_rate_limits": { 00:05:24.799 "rw_ios_per_sec": 0, 00:05:24.799 "rw_mbytes_per_sec": 0, 00:05:24.799 "r_mbytes_per_sec": 0, 00:05:24.799 "w_mbytes_per_sec": 0 00:05:24.799 }, 00:05:24.799 "claimed": true, 00:05:24.799 "claim_type": "exclusive_write", 00:05:24.799 "zoned": false, 00:05:24.799 "supported_io_types": { 00:05:24.799 "read": true, 00:05:24.799 "write": true, 00:05:24.799 "unmap": true, 00:05:24.799 "flush": true, 00:05:24.799 "reset": true, 00:05:24.799 "nvme_admin": false, 00:05:24.799 "nvme_io": false, 00:05:24.799 "nvme_io_md": false, 00:05:24.799 "write_zeroes": true, 00:05:24.799 "zcopy": true, 00:05:24.799 "get_zone_info": false, 00:05:24.799 "zone_management": false, 00:05:24.799 "zone_append": false, 00:05:24.799 "compare": false, 00:05:24.799 "compare_and_write": false, 00:05:24.799 "abort": true, 00:05:24.799 "seek_hole": false, 00:05:24.799 "seek_data": false, 00:05:24.799 "copy": true, 00:05:24.799 "nvme_iov_md": false 00:05:24.799 }, 00:05:24.799 "memory_domains": [ 00:05:24.799 { 00:05:24.799 "dma_device_id": "system", 00:05:24.799 "dma_device_type": 1 00:05:24.799 }, 00:05:24.799 { 00:05:24.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.799 "dma_device_type": 2 00:05:24.799 } 00:05:24.799 ], 00:05:24.799 "driver_specific": {} 00:05:24.799 }, 00:05:24.799 { 00:05:24.799 "name": "Passthru0", 00:05:24.799 "aliases": [ 00:05:24.799 "94e6d4d1-442c-5ef8-a281-e2a7c9a32cc2" 00:05:24.799 ], 00:05:24.799 "product_name": "passthru", 00:05:24.799 "block_size": 512, 00:05:24.799 "num_blocks": 16384, 00:05:24.799 "uuid": "94e6d4d1-442c-5ef8-a281-e2a7c9a32cc2", 00:05:24.799 "assigned_rate_limits": { 00:05:24.799 "rw_ios_per_sec": 0, 00:05:24.799 "rw_mbytes_per_sec": 0, 00:05:24.799 "r_mbytes_per_sec": 0, 00:05:24.799 "w_mbytes_per_sec": 0 00:05:24.799 }, 00:05:24.799 "claimed": false, 00:05:24.799 "zoned": false, 00:05:24.799 
"supported_io_types": { 00:05:24.799 "read": true, 00:05:24.799 "write": true, 00:05:24.799 "unmap": true, 00:05:24.799 "flush": true, 00:05:24.799 "reset": true, 00:05:24.799 "nvme_admin": false, 00:05:24.799 "nvme_io": false, 00:05:24.799 "nvme_io_md": false, 00:05:24.799 "write_zeroes": true, 00:05:24.799 "zcopy": true, 00:05:24.799 "get_zone_info": false, 00:05:24.799 "zone_management": false, 00:05:24.799 "zone_append": false, 00:05:24.799 "compare": false, 00:05:24.799 "compare_and_write": false, 00:05:24.799 "abort": true, 00:05:24.799 "seek_hole": false, 00:05:24.799 "seek_data": false, 00:05:24.799 "copy": true, 00:05:24.799 "nvme_iov_md": false 00:05:24.799 }, 00:05:24.799 "memory_domains": [ 00:05:24.799 { 00:05:24.799 "dma_device_id": "system", 00:05:24.799 "dma_device_type": 1 00:05:24.799 }, 00:05:24.799 { 00:05:24.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.799 "dma_device_type": 2 00:05:24.799 } 00:05:24.799 ], 00:05:24.799 "driver_specific": { 00:05:24.799 "passthru": { 00:05:24.799 "name": "Passthru0", 00:05:24.799 "base_bdev_name": "Malloc2" 00:05:24.799 } 00:05:24.799 } 00:05:24.799 } 00:05:24.799 ]' 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.060 00:05:25.060 real 0m0.316s 00:05:25.060 user 0m0.186s 00:05:25.060 sys 0m0.056s 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.060 16:18:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.060 ************************************ 00:05:25.060 END TEST rpc_daemon_integrity 00:05:25.060 ************************************ 00:05:25.060 16:18:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:25.060 16:18:16 rpc -- rpc/rpc.sh@84 -- # killprocess 69164 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 69164 ']' 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@954 -- # kill -0 69164 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@955 -- # uname 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69164 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.060 killing process with pid 69164 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69164' 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@969 -- # kill 69164 00:05:25.060 16:18:16 rpc -- common/autotest_common.sh@974 -- # wait 69164 00:05:25.630 00:05:25.630 real 0m2.878s 00:05:25.630 user 0m3.457s 00:05:25.630 sys 0m0.855s 00:05:25.630 16:18:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.630 16:18:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.630 ************************************ 00:05:25.630 END TEST rpc 00:05:25.630 ************************************ 00:05:25.630 16:18:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:25.630 16:18:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.630 16:18:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.630 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.630 ************************************ 00:05:25.630 START TEST skip_rpc 00:05:25.630 ************************************ 00:05:25.630 16:18:17 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:25.630 * Looking for test storage... 
00:05:25.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.630 16:18:17 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:25.630 16:18:17 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:25.630 16:18:17 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.891 16:18:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:25.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.891 --rc genhtml_branch_coverage=1 00:05:25.891 --rc genhtml_function_coverage=1 00:05:25.891 --rc genhtml_legend=1 00:05:25.891 --rc geninfo_all_blocks=1 00:05:25.891 --rc geninfo_unexecuted_blocks=1 00:05:25.891 00:05:25.891 ' 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:25.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.891 --rc genhtml_branch_coverage=1 00:05:25.891 --rc genhtml_function_coverage=1 00:05:25.891 --rc genhtml_legend=1 00:05:25.891 --rc geninfo_all_blocks=1 00:05:25.891 --rc geninfo_unexecuted_blocks=1 00:05:25.891 00:05:25.891 ' 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:25.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.891 --rc genhtml_branch_coverage=1 00:05:25.891 --rc genhtml_function_coverage=1 00:05:25.891 --rc genhtml_legend=1 00:05:25.891 --rc geninfo_all_blocks=1 00:05:25.891 --rc geninfo_unexecuted_blocks=1 00:05:25.891 00:05:25.891 ' 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:25.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.891 --rc genhtml_branch_coverage=1 00:05:25.891 --rc genhtml_function_coverage=1 00:05:25.891 --rc genhtml_legend=1 00:05:25.891 --rc geninfo_all_blocks=1 00:05:25.891 --rc geninfo_unexecuted_blocks=1 00:05:25.891 00:05:25.891 ' 00:05:25.891 16:18:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:25.891 16:18:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:25.891 16:18:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.891 16:18:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.891 ************************************ 00:05:25.891 START TEST skip_rpc 00:05:25.891 ************************************ 00:05:25.891 16:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:25.891 16:18:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69366 00:05:25.891 16:18:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:25.892 16:18:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.892 16:18:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:25.892 [2024-11-28 16:18:17.575388] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:25.892 [2024-11-28 16:18:17.575501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69366 ] 00:05:26.151 [2024-11-28 16:18:17.735421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.151 [2024-11-28 16:18:17.779642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69366 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69366 ']' 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69366 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69366 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.431 killing process with pid 69366 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69366' 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69366 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69366 00:05:31.431 00:05:31.431 real 0m5.450s 00:05:31.431 user 0m5.030s 00:05:31.431 sys 0m0.342s 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.431 16:18:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.431 ************************************ 00:05:31.431 END TEST skip_rpc 00:05:31.431 ************************************ 00:05:31.431 16:18:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:31.431 16:18:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.431 16:18:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.431 16:18:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.431 
************************************ 00:05:31.431 START TEST skip_rpc_with_json 00:05:31.431 ************************************ 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69453 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69453 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69453 ']' 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.431 16:18:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.432 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.432 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.432 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.432 [2024-11-28 16:18:23.103087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:31.432 [2024-11-28 16:18:23.103231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69453 ] 00:05:31.690 [2024-11-28 16:18:23.269364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.690 [2024-11-28 16:18:23.313713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.258 [2024-11-28 16:18:23.893922] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:32.258 request: 00:05:32.258 { 00:05:32.258 "trtype": "tcp", 00:05:32.258 "method": "nvmf_get_transports", 00:05:32.258 "req_id": 1 00:05:32.258 } 00:05:32.258 Got JSON-RPC error response 00:05:32.258 response: 00:05:32.258 { 00:05:32.258 "code": -19, 00:05:32.258 "message": "No such device" 00:05:32.258 } 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.258 [2024-11-28 16:18:23.906033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.258 16:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.517 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.517 16:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.517 { 00:05:32.517 "subsystems": [ 00:05:32.517 { 00:05:32.517 "subsystem": "fsdev", 00:05:32.517 "config": [ 00:05:32.517 { 00:05:32.517 "method": "fsdev_set_opts", 00:05:32.517 "params": { 00:05:32.517 "fsdev_io_pool_size": 65535, 00:05:32.517 "fsdev_io_cache_size": 256 00:05:32.517 } 00:05:32.517 } 00:05:32.517 ] 00:05:32.517 }, 00:05:32.517 { 00:05:32.517 "subsystem": "keyring", 00:05:32.517 "config": [] 00:05:32.517 }, 00:05:32.517 { 00:05:32.517 "subsystem": "iobuf", 00:05:32.517 "config": [ 00:05:32.517 { 00:05:32.517 "method": "iobuf_set_options", 00:05:32.517 "params": { 00:05:32.517 "small_pool_count": 8192, 00:05:32.517 "large_pool_count": 1024, 00:05:32.517 "small_bufsize": 8192, 00:05:32.517 "large_bufsize": 135168 00:05:32.517 } 00:05:32.517 } 00:05:32.517 ] 00:05:32.517 }, 00:05:32.517 { 00:05:32.517 "subsystem": "sock", 00:05:32.517 "config": [ 00:05:32.517 { 00:05:32.517 "method": "sock_set_default_impl", 00:05:32.517 "params": { 00:05:32.517 "impl_name": "posix" 00:05:32.517 } 00:05:32.517 }, 00:05:32.517 { 00:05:32.517 "method": "sock_impl_set_options", 00:05:32.517 "params": { 00:05:32.517 "impl_name": "ssl", 00:05:32.517 "recv_buf_size": 4096, 00:05:32.517 "send_buf_size": 4096, 00:05:32.517 "enable_recv_pipe": true, 00:05:32.517 "enable_quickack": false, 00:05:32.517 "enable_placement_id": 0, 00:05:32.517 
"enable_zerocopy_send_server": true, 00:05:32.517 "enable_zerocopy_send_client": false, 00:05:32.517 "zerocopy_threshold": 0, 00:05:32.517 "tls_version": 0, 00:05:32.517 "enable_ktls": false 00:05:32.517 } 00:05:32.517 }, 00:05:32.517 { 00:05:32.517 "method": "sock_impl_set_options", 00:05:32.517 "params": { 00:05:32.517 "impl_name": "posix", 00:05:32.517 "recv_buf_size": 2097152, 00:05:32.517 "send_buf_size": 2097152, 00:05:32.517 "enable_recv_pipe": true, 00:05:32.517 "enable_quickack": false, 00:05:32.517 "enable_placement_id": 0, 00:05:32.517 "enable_zerocopy_send_server": true, 00:05:32.517 "enable_zerocopy_send_client": false, 00:05:32.517 "zerocopy_threshold": 0, 00:05:32.517 "tls_version": 0, 00:05:32.517 "enable_ktls": false 00:05:32.517 } 00:05:32.517 } 00:05:32.517 ] 00:05:32.517 }, 00:05:32.517 { 00:05:32.517 "subsystem": "vmd", 00:05:32.517 "config": [] 00:05:32.517 }, 00:05:32.517 { 00:05:32.518 "subsystem": "accel", 00:05:32.518 "config": [ 00:05:32.518 { 00:05:32.518 "method": "accel_set_options", 00:05:32.518 "params": { 00:05:32.518 "small_cache_size": 128, 00:05:32.518 "large_cache_size": 16, 00:05:32.518 "task_count": 2048, 00:05:32.518 "sequence_count": 2048, 00:05:32.518 "buf_count": 2048 00:05:32.518 } 00:05:32.518 } 00:05:32.518 ] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "bdev", 00:05:32.518 "config": [ 00:05:32.518 { 00:05:32.518 "method": "bdev_set_options", 00:05:32.518 "params": { 00:05:32.518 "bdev_io_pool_size": 65535, 00:05:32.518 "bdev_io_cache_size": 256, 00:05:32.518 "bdev_auto_examine": true, 00:05:32.518 "iobuf_small_cache_size": 128, 00:05:32.518 "iobuf_large_cache_size": 16 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "bdev_raid_set_options", 00:05:32.518 "params": { 00:05:32.518 "process_window_size_kb": 1024, 00:05:32.518 "process_max_bandwidth_mb_sec": 0 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "bdev_iscsi_set_options", 00:05:32.518 "params": { 00:05:32.518 
"timeout_sec": 30 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "bdev_nvme_set_options", 00:05:32.518 "params": { 00:05:32.518 "action_on_timeout": "none", 00:05:32.518 "timeout_us": 0, 00:05:32.518 "timeout_admin_us": 0, 00:05:32.518 "keep_alive_timeout_ms": 10000, 00:05:32.518 "arbitration_burst": 0, 00:05:32.518 "low_priority_weight": 0, 00:05:32.518 "medium_priority_weight": 0, 00:05:32.518 "high_priority_weight": 0, 00:05:32.518 "nvme_adminq_poll_period_us": 10000, 00:05:32.518 "nvme_ioq_poll_period_us": 0, 00:05:32.518 "io_queue_requests": 0, 00:05:32.518 "delay_cmd_submit": true, 00:05:32.518 "transport_retry_count": 4, 00:05:32.518 "bdev_retry_count": 3, 00:05:32.518 "transport_ack_timeout": 0, 00:05:32.518 "ctrlr_loss_timeout_sec": 0, 00:05:32.518 "reconnect_delay_sec": 0, 00:05:32.518 "fast_io_fail_timeout_sec": 0, 00:05:32.518 "disable_auto_failback": false, 00:05:32.518 "generate_uuids": false, 00:05:32.518 "transport_tos": 0, 00:05:32.518 "nvme_error_stat": false, 00:05:32.518 "rdma_srq_size": 0, 00:05:32.518 "io_path_stat": false, 00:05:32.518 "allow_accel_sequence": false, 00:05:32.518 "rdma_max_cq_size": 0, 00:05:32.518 "rdma_cm_event_timeout_ms": 0, 00:05:32.518 "dhchap_digests": [ 00:05:32.518 "sha256", 00:05:32.518 "sha384", 00:05:32.518 "sha512" 00:05:32.518 ], 00:05:32.518 "dhchap_dhgroups": [ 00:05:32.518 "null", 00:05:32.518 "ffdhe2048", 00:05:32.518 "ffdhe3072", 00:05:32.518 "ffdhe4096", 00:05:32.518 "ffdhe6144", 00:05:32.518 "ffdhe8192" 00:05:32.518 ] 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "bdev_nvme_set_hotplug", 00:05:32.518 "params": { 00:05:32.518 "period_us": 100000, 00:05:32.518 "enable": false 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "bdev_wait_for_examine" 00:05:32.518 } 00:05:32.518 ] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "scsi", 00:05:32.518 "config": null 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "scheduler", 
00:05:32.518 "config": [ 00:05:32.518 { 00:05:32.518 "method": "framework_set_scheduler", 00:05:32.518 "params": { 00:05:32.518 "name": "static" 00:05:32.518 } 00:05:32.518 } 00:05:32.518 ] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "vhost_scsi", 00:05:32.518 "config": [] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "vhost_blk", 00:05:32.518 "config": [] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "ublk", 00:05:32.518 "config": [] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "nbd", 00:05:32.518 "config": [] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "nvmf", 00:05:32.518 "config": [ 00:05:32.518 { 00:05:32.518 "method": "nvmf_set_config", 00:05:32.518 "params": { 00:05:32.518 "discovery_filter": "match_any", 00:05:32.518 "admin_cmd_passthru": { 00:05:32.518 "identify_ctrlr": false 00:05:32.518 }, 00:05:32.518 "dhchap_digests": [ 00:05:32.518 "sha256", 00:05:32.518 "sha384", 00:05:32.518 "sha512" 00:05:32.518 ], 00:05:32.518 "dhchap_dhgroups": [ 00:05:32.518 "null", 00:05:32.518 "ffdhe2048", 00:05:32.518 "ffdhe3072", 00:05:32.518 "ffdhe4096", 00:05:32.518 "ffdhe6144", 00:05:32.518 "ffdhe8192" 00:05:32.518 ] 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "nvmf_set_max_subsystems", 00:05:32.518 "params": { 00:05:32.518 "max_subsystems": 1024 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "nvmf_set_crdt", 00:05:32.518 "params": { 00:05:32.518 "crdt1": 0, 00:05:32.518 "crdt2": 0, 00:05:32.518 "crdt3": 0 00:05:32.518 } 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "method": "nvmf_create_transport", 00:05:32.518 "params": { 00:05:32.518 "trtype": "TCP", 00:05:32.518 "max_queue_depth": 128, 00:05:32.518 "max_io_qpairs_per_ctrlr": 127, 00:05:32.518 "in_capsule_data_size": 4096, 00:05:32.518 "max_io_size": 131072, 00:05:32.518 "io_unit_size": 131072, 00:05:32.518 "max_aq_depth": 128, 00:05:32.518 "num_shared_buffers": 511, 00:05:32.518 "buf_cache_size": 4294967295, 
00:05:32.518 "dif_insert_or_strip": false, 00:05:32.518 "zcopy": false, 00:05:32.518 "c2h_success": true, 00:05:32.518 "sock_priority": 0, 00:05:32.518 "abort_timeout_sec": 1, 00:05:32.518 "ack_timeout": 0, 00:05:32.518 "data_wr_pool_size": 0 00:05:32.518 } 00:05:32.518 } 00:05:32.518 ] 00:05:32.518 }, 00:05:32.518 { 00:05:32.518 "subsystem": "iscsi", 00:05:32.518 "config": [ 00:05:32.518 { 00:05:32.518 "method": "iscsi_set_options", 00:05:32.518 "params": { 00:05:32.518 "node_base": "iqn.2016-06.io.spdk", 00:05:32.518 "max_sessions": 128, 00:05:32.518 "max_connections_per_session": 2, 00:05:32.518 "max_queue_depth": 64, 00:05:32.518 "default_time2wait": 2, 00:05:32.518 "default_time2retain": 20, 00:05:32.518 "first_burst_length": 8192, 00:05:32.518 "immediate_data": true, 00:05:32.518 "allow_duplicated_isid": false, 00:05:32.518 "error_recovery_level": 0, 00:05:32.518 "nop_timeout": 60, 00:05:32.518 "nop_in_interval": 30, 00:05:32.518 "disable_chap": false, 00:05:32.518 "require_chap": false, 00:05:32.519 "mutual_chap": false, 00:05:32.519 "chap_group": 0, 00:05:32.519 "max_large_datain_per_connection": 64, 00:05:32.519 "max_r2t_per_connection": 4, 00:05:32.519 "pdu_pool_size": 36864, 00:05:32.519 "immediate_data_pool_size": 16384, 00:05:32.519 "data_out_pool_size": 2048 00:05:32.519 } 00:05:32.519 } 00:05:32.519 ] 00:05:32.519 } 00:05:32.519 ] 00:05:32.519 } 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69453 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69453 ']' 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69453 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69453 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.519 killing process with pid 69453 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69453' 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69453 00:05:32.519 16:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69453 00:05:32.777 16:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69482 00:05:32.777 16:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.777 16:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69482 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69482 ']' 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69482 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69482 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.073 killing process with pid 69482 
00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69482' 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69482 00:05:38.073 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69482 00:05:38.332 16:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:38.332 16:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:38.332 00:05:38.332 real 0m6.966s 00:05:38.332 user 0m6.441s 00:05:38.332 sys 0m0.794s 00:05:38.332 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.332 16:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.332 ************************************ 00:05:38.332 END TEST skip_rpc_with_json 00:05:38.332 ************************************ 00:05:38.332 16:18:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:38.332 16:18:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.332 16:18:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.332 16:18:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.332 ************************************ 00:05:38.332 START TEST skip_rpc_with_delay 00:05:38.332 ************************************ 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.332 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:38.333 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.592 [2024-11-28 16:18:30.135890] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:38.592 [2024-11-28 16:18:30.136018] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:38.592 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:38.592 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.592 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.592 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.592 00:05:38.592 real 0m0.160s 00:05:38.592 user 0m0.091s 00:05:38.592 sys 0m0.067s 00:05:38.592 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.592 16:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:38.592 ************************************ 00:05:38.592 END TEST skip_rpc_with_delay 00:05:38.592 ************************************ 00:05:38.592 16:18:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:38.592 16:18:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:38.592 16:18:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:38.592 16:18:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.592 16:18:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.592 16:18:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.592 ************************************ 00:05:38.592 START TEST exit_on_failed_rpc_init 00:05:38.592 ************************************ 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69592 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69592 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69592 ']' 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.592 16:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.851 [2024-11-28 16:18:30.362864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:38.851 [2024-11-28 16:18:30.362972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69592 ] 00:05:38.851 [2024-11-28 16:18:30.521953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.851 [2024-11-28 16:18:30.567230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.418 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.419 16:18:31 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.419 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.419 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.419 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.677 [2024-11-28 16:18:31.274103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:39.677 [2024-11-28 16:18:31.274238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69606 ] 00:05:39.677 [2024-11-28 16:18:31.434790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.937 [2024-11-28 16:18:31.508501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.937 [2024-11-28 16:18:31.508606] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:39.937 [2024-11-28 16:18:31.508624] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:39.937 [2024-11-28 16:18:31.508658] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69592 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69592 ']' 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69592 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.937 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69592 00:05:40.195 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.195 killing process with pid 69592 00:05:40.195 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.195 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69592' 00:05:40.195 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69592 00:05:40.195 16:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69592 00:05:40.454 00:05:40.454 real 0m1.833s 00:05:40.454 user 0m1.981s 00:05:40.454 sys 0m0.564s 00:05:40.454 ************************************ 00:05:40.454 END TEST exit_on_failed_rpc_init 00:05:40.454 ************************************ 00:05:40.454 16:18:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.454 16:18:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.454 16:18:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.454 00:05:40.454 real 0m14.926s 00:05:40.454 user 0m13.764s 00:05:40.454 sys 0m2.080s 00:05:40.454 16:18:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.454 ************************************ 00:05:40.454 END TEST skip_rpc 00:05:40.454 ************************************ 00:05:40.454 16:18:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.712 16:18:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.712 16:18:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.712 16:18:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.712 16:18:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.712 ************************************ 00:05:40.712 START TEST rpc_client 00:05:40.713 ************************************ 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.713 * Looking for test storage... 
00:05:40.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.713 16:18:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.713 --rc genhtml_branch_coverage=1 00:05:40.713 --rc genhtml_function_coverage=1 00:05:40.713 --rc genhtml_legend=1 00:05:40.713 --rc geninfo_all_blocks=1 00:05:40.713 --rc geninfo_unexecuted_blocks=1 00:05:40.713 00:05:40.713 ' 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.713 --rc genhtml_branch_coverage=1 00:05:40.713 --rc genhtml_function_coverage=1 00:05:40.713 --rc genhtml_legend=1 00:05:40.713 --rc geninfo_all_blocks=1 00:05:40.713 --rc geninfo_unexecuted_blocks=1 00:05:40.713 00:05:40.713 ' 00:05:40.713 16:18:32 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.713 --rc genhtml_branch_coverage=1 00:05:40.713 --rc genhtml_function_coverage=1 00:05:40.713 --rc genhtml_legend=1 00:05:40.713 --rc geninfo_all_blocks=1 00:05:40.713 --rc geninfo_unexecuted_blocks=1 00:05:40.713 00:05:40.713 ' 00:05:40.713 16:18:32 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.713 --rc genhtml_branch_coverage=1 00:05:40.713 --rc genhtml_function_coverage=1 00:05:40.713 --rc genhtml_legend=1 00:05:40.713 --rc geninfo_all_blocks=1 00:05:40.713 --rc geninfo_unexecuted_blocks=1 00:05:40.713 00:05:40.713 ' 00:05:40.713 16:18:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:40.972 OK 00:05:40.972 16:18:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.972 00:05:40.972 real 0m0.311s 00:05:40.972 user 0m0.166s 00:05:40.972 sys 0m0.160s 00:05:40.972 16:18:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.972 16:18:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.972 ************************************ 00:05:40.972 END TEST rpc_client 00:05:40.972 ************************************ 00:05:40.972 16:18:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:40.972 16:18:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.972 16:18:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.972 16:18:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.972 ************************************ 00:05:40.972 START TEST json_config 00:05:40.972 ************************************ 00:05:40.972 16:18:32 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:40.972 16:18:32 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.972 16:18:32 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.972 16:18:32 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.232 16:18:32 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.232 16:18:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.232 16:18:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.232 16:18:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.232 16:18:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.232 16:18:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.232 16:18:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.232 16:18:32 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.232 16:18:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.232 16:18:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.232 16:18:32 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.232 16:18:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.232 16:18:32 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.232 16:18:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.232 16:18:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.232 16:18:32 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.232 16:18:32 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.232 16:18:32 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.232 --rc genhtml_branch_coverage=1 00:05:41.232 --rc genhtml_function_coverage=1 00:05:41.232 --rc genhtml_legend=1 00:05:41.232 --rc geninfo_all_blocks=1 00:05:41.232 --rc geninfo_unexecuted_blocks=1 00:05:41.232 00:05:41.232 ' 00:05:41.232 16:18:32 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.232 --rc genhtml_branch_coverage=1 00:05:41.232 --rc genhtml_function_coverage=1 00:05:41.232 --rc genhtml_legend=1 00:05:41.232 --rc geninfo_all_blocks=1 00:05:41.232 --rc geninfo_unexecuted_blocks=1 00:05:41.232 00:05:41.232 ' 00:05:41.232 16:18:32 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.232 --rc genhtml_branch_coverage=1 00:05:41.232 --rc genhtml_function_coverage=1 00:05:41.232 --rc genhtml_legend=1 00:05:41.232 --rc geninfo_all_blocks=1 00:05:41.232 --rc geninfo_unexecuted_blocks=1 00:05:41.232 00:05:41.232 ' 00:05:41.232 16:18:32 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.232 --rc genhtml_branch_coverage=1 00:05:41.232 --rc genhtml_function_coverage=1 00:05:41.232 --rc genhtml_legend=1 00:05:41.232 --rc geninfo_all_blocks=1 00:05:41.232 --rc geninfo_unexecuted_blocks=1 00:05:41.232 00:05:41.232 ' 00:05:41.232 16:18:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58a8556f-8060-4dbf-98a4-c4a47e6467c0 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=58a8556f-8060-4dbf-98a4-c4a47e6467c0 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.232 16:18:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.232 16:18:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.232 16:18:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.232 16:18:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.232 16:18:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.233 16:18:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.233 16:18:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.233 16:18:32 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.233 16:18:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.233 16:18:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.233 16:18:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:41.233 WARNING: No tests are enabled so not running JSON configuration tests 00:05:41.233 16:18:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:41.233 00:05:41.233 real 0m0.236s 00:05:41.233 user 0m0.143s 00:05:41.233 sys 0m0.096s 00:05:41.233 ************************************ 00:05:41.233 END TEST json_config 00:05:41.233 ************************************ 00:05:41.233 16:18:32 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.233 16:18:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.233 16:18:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.233 16:18:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.233 16:18:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.233 16:18:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.233 ************************************ 00:05:41.233 START TEST json_config_extra_key 00:05:41.233 ************************************ 00:05:41.233 16:18:32 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.492 16:18:33 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.492 16:18:33 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:41.492 16:18:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.493 16:18:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:41.493 16:18:33 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.493 16:18:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.493 --rc genhtml_branch_coverage=1 00:05:41.493 --rc genhtml_function_coverage=1 00:05:41.493 --rc genhtml_legend=1 00:05:41.493 --rc geninfo_all_blocks=1 00:05:41.493 --rc geninfo_unexecuted_blocks=1 00:05:41.493 00:05:41.493 ' 00:05:41.493 16:18:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.493 --rc genhtml_branch_coverage=1 00:05:41.493 --rc genhtml_function_coverage=1 00:05:41.493 --rc 
genhtml_legend=1 00:05:41.493 --rc geninfo_all_blocks=1 00:05:41.493 --rc geninfo_unexecuted_blocks=1 00:05:41.493 00:05:41.493 ' 00:05:41.493 16:18:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.493 --rc genhtml_branch_coverage=1 00:05:41.493 --rc genhtml_function_coverage=1 00:05:41.493 --rc genhtml_legend=1 00:05:41.493 --rc geninfo_all_blocks=1 00:05:41.493 --rc geninfo_unexecuted_blocks=1 00:05:41.493 00:05:41.493 ' 00:05:41.493 16:18:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.493 --rc genhtml_branch_coverage=1 00:05:41.493 --rc genhtml_function_coverage=1 00:05:41.493 --rc genhtml_legend=1 00:05:41.493 --rc geninfo_all_blocks=1 00:05:41.493 --rc geninfo_unexecuted_blocks=1 00:05:41.493 00:05:41.493 ' 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58a8556f-8060-4dbf-98a4-c4a47e6467c0 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=58a8556f-8060-4dbf-98a4-c4a47e6467c0 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.493 16:18:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.493 16:18:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.493 16:18:33 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.493 16:18:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.493 16:18:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.493 16:18:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.493 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.493 16:18:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.493 INFO: launching applications... 
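The `[: : integer expression expected` message in the trace above comes from `'[' '' -eq 1 ']'` at `nvmf/common.sh` line 33: `-eq` requires integer operands, so testing an unset/empty variable makes `[` fail with status 2 rather than evaluate to false. A minimal reproduction and the usual guard (the variable name here is illustrative, not from SPDK):

```shell
#!/usr/bin/env bash
# '[' '' -eq 1 ']' errors out ("integer expression expected", status 2)
# because -eq needs integer operands on both sides.
flag=""

[ "$flag" -eq 1 ] 2>/dev/null
echo "bare test status: $?"        # → bare test status: 2

# Expanding with a default makes the comparison well-defined:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"              # → flag unset
fi
```

The same effect can be had with `[[ -n $flag && $flag -eq 1 ]]`, which short-circuits before the numeric compare when the variable is empty.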
00:05:41.493 16:18:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69794 00:05:41.493 16:18:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.494 Waiting for target to run... 00:05:41.494 16:18:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69794 /var/tmp/spdk_tgt.sock 00:05:41.494 16:18:33 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69794 ']' 00:05:41.494 16:18:33 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.494 16:18:33 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.494 16:18:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
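`waitforlisten 69794 /var/tmp/spdk_tgt.sock` above blocks until spdk_tgt is up, with `max_retries=100` as a bounded patience budget. A simplified stand-in for that pattern is sketched below; it only checks that the socket path exists, whereas the real helper additionally confirms the app answers RPCs on it:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket path appears, or give up.
# Simplified sketch of SPDK's waitforlisten (existence check only).
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -S "$sock" ] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # listener never showed up
}

wait_for_socket /var/tmp/spdk_tgt.sock 3 || echo "target not listening"
```

Bounding the retries matters in CI: an app that crashes on startup should fail the test quickly instead of hanging the pipeline.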
00:05:41.494 16:18:33 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.494 16:18:33 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.494 16:18:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.494 [2024-11-28 16:18:33.236335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:41.494 [2024-11-28 16:18:33.236544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69794 ] 00:05:42.062 [2024-11-28 16:18:33.611775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.062 [2024-11-28 16:18:33.641604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.321 00:05:42.321 INFO: shutting down applications... 00:05:42.321 16:18:34 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.321 16:18:34 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.321 16:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
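The shutdown that begins here (`json_config_test_shutdown_app`) sends SIGINT and then polls `kill -0` up to 30 times with 0.5 s sleeps before giving up on the target. A generic sketch of that bounded-poll shutdown, with the signal and retry count turned into parameters for illustration:

```shell
#!/usr/bin/env bash
# Bounded graceful shutdown: signal the process, then poll for exit.
# `kill -0` delivers no signal; it only checks the process still exists.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} retries=${3:-30} i
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
        sleep 0.5
    done
    return 1                                     # still running after budget
}

sleep 60 &                                 # stand-in for spdk_tgt
shutdown_app $! SIGTERM 10 && echo "target shutdown done"
```

The demo uses SIGTERM rather than SIGINT because non-interactive shells start background jobs with SIGINT ignored, so a SIGINT demo would spuriously time out.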
00:05:42.321 16:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69794 ]] 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69794 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69794 00:05:42.321 16:18:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69794 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.889 16:18:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.889 SPDK target shutdown done 00:05:42.889 16:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:42.889 Success 00:05:42.889 00:05:42.889 real 0m1.628s 00:05:42.889 user 0m1.317s 00:05:42.889 sys 0m0.489s 00:05:42.889 16:18:34 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.889 16:18:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.889 ************************************ 
00:05:42.889 END TEST json_config_extra_key 00:05:42.889 ************************************ 00:05:42.889 16:18:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.889 16:18:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.889 16:18:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.889 16:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.889 ************************************ 00:05:42.889 START TEST alias_rpc 00:05:42.889 ************************************ 00:05:42.889 16:18:34 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.148 * Looking for test storage... 00:05:43.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:43.148 16:18:34 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.148 16:18:34 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.148 16:18:34 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.148 16:18:34 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.148 16:18:34 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.148 16:18:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.149 16:18:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:43.149 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:43.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.149 --rc genhtml_branch_coverage=1 00:05:43.149 --rc genhtml_function_coverage=1 00:05:43.149 --rc genhtml_legend=1 00:05:43.149 --rc geninfo_all_blocks=1 00:05:43.149 --rc geninfo_unexecuted_blocks=1 00:05:43.149 00:05:43.149 ' 00:05:43.149 16:18:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.149 16:18:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69873 00:05:43.149 16:18:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.149 16:18:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69873 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69873 ']' 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
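The `lt 1.15 2` / `cmp_versions` trace above splits dotted versions on `IFS=.-:` into arrays and compares them component by component, padding the shorter one with zeros. A condensed sketch of that comparison (numeric components only, as in the traced inputs; `version_lt` is an illustrative name, not SPDK's):

```shell
#!/usr/bin/env bash
# Component-wise dotted-version compare, in the spirit of the
# scripts/common.sh cmp_versions trace: split on '.', '-', ':' and
# compare numerically, treating missing components as 0.
version_lt() {                      # version_lt A B → true if A < B
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                        # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"    # → 1.15 < 2
```

Splitting first is what makes `1.15 < 2` come out right; a plain string compare would rank `"1.15"` above `"1.2"`.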
00:05:43.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.149 16:18:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.409 [2024-11-28 16:18:34.930154] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:43.409 [2024-11-28 16:18:34.930638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69873 ] 00:05:43.409 [2024-11-28 16:18:35.090064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.409 [2024-11-28 16:18:35.134979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.976 16:18:35 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.976 16:18:35 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:43.976 16:18:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:44.234 16:18:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69873 00:05:44.234 16:18:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69873 ']' 00:05:44.234 16:18:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69873 00:05:44.234 16:18:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:44.234 16:18:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.234 16:18:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69873 00:05:44.493 16:18:36 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.493 16:18:36 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.493 16:18:36 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 69873' 00:05:44.493 killing process with pid 69873 00:05:44.493 16:18:36 alias_rpc -- common/autotest_common.sh@969 -- # kill 69873 00:05:44.493 16:18:36 alias_rpc -- common/autotest_common.sh@974 -- # wait 69873 00:05:44.753 00:05:44.753 real 0m1.798s 00:05:44.753 user 0m1.778s 00:05:44.753 sys 0m0.544s 00:05:44.753 16:18:36 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.753 16:18:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.753 ************************************ 00:05:44.753 END TEST alias_rpc 00:05:44.753 ************************************ 00:05:44.753 16:18:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:44.753 16:18:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.753 16:18:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.753 16:18:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.753 16:18:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.753 ************************************ 00:05:44.753 START TEST spdkcli_tcp 00:05:44.753 ************************************ 00:05:44.753 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:45.012 * Looking for test storage... 
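The spdkcli_tcp test starting here talks to SPDK's UNIX-socket RPC server over TCP by interposing socat as a bridge, as its later trace shows (`socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`, then `rpc.py -s 127.0.0.1 -p 9998`). A self-contained demo of the same pattern against a throwaway echo server; the socket path and port below are illustrative:

```shell
#!/usr/bin/env bash
# TCP → UNIX-socket bridge with socat, the trick that lets a TCP-only
# client reach a server listening on a UNIX domain socket.
command -v socat >/dev/null || { echo "socat not installed; skipping demo"; exit 0; }

SOCK="/tmp/bridge_demo_$$.sock"
socat UNIX-LISTEN:"$SOCK",fork EXEC:cat &                     # echo server on a UNIX socket
srv=$!
sleep 0.2
socat TCP-LISTEN:19998,fork,reuseaddr UNIX-CONNECT:"$SOCK" &  # the bridge itself
bridge=$!
sleep 0.2

reply=$(echo hello | socat -t 1 - TCP:127.0.0.1:19998)        # round-trips through the bridge
echo "reply: $reply"

kill "$srv" "$bridge" 2>/dev/null
rm -f "$SOCK"
```

Running the RPC client against `127.0.0.1:9998` instead of the socket directly is what lets the test exercise rpc.py's TCP transport (`-s`/`-p`) without changing the target at all.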
00:05:45.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.012 16:18:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.012 --rc genhtml_branch_coverage=1 00:05:45.012 --rc genhtml_function_coverage=1 00:05:45.012 --rc genhtml_legend=1 00:05:45.012 --rc geninfo_all_blocks=1 00:05:45.012 --rc geninfo_unexecuted_blocks=1 00:05:45.012 00:05:45.012 ' 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.012 --rc genhtml_branch_coverage=1 00:05:45.012 --rc genhtml_function_coverage=1 00:05:45.012 --rc genhtml_legend=1 00:05:45.012 --rc geninfo_all_blocks=1 00:05:45.012 --rc geninfo_unexecuted_blocks=1 00:05:45.012 00:05:45.012 ' 00:05:45.012 16:18:36 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.012 --rc genhtml_branch_coverage=1 00:05:45.012 --rc genhtml_function_coverage=1 00:05:45.012 --rc genhtml_legend=1 00:05:45.012 --rc geninfo_all_blocks=1 00:05:45.012 --rc geninfo_unexecuted_blocks=1 00:05:45.012 00:05:45.012 ' 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.012 --rc genhtml_branch_coverage=1 00:05:45.012 --rc genhtml_function_coverage=1 00:05:45.012 --rc genhtml_legend=1 00:05:45.012 --rc geninfo_all_blocks=1 00:05:45.012 --rc geninfo_unexecuted_blocks=1 00:05:45.012 00:05:45.012 ' 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69958 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:45.012 16:18:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69958 00:05:45.012 16:18:36 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69958 ']' 00:05:45.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.012 16:18:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.271 [2024-11-28 16:18:36.826152] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:45.271 [2024-11-28 16:18:36.826969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ] 00:05:45.271 [2024-11-28 16:18:36.991090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.271 [2024-11-28 16:18:37.038531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.271 [2024-11-28 16:18:37.038584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.841 16:18:37 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.841 16:18:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:45.841 16:18:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.841 16:18:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69968 00:05:45.841 16:18:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.102 [ 00:05:46.102 "bdev_malloc_delete", 
00:05:46.102 "bdev_malloc_create", 00:05:46.102 "bdev_null_resize", 00:05:46.102 "bdev_null_delete", 00:05:46.102 "bdev_null_create", 00:05:46.102 "bdev_nvme_cuse_unregister", 00:05:46.102 "bdev_nvme_cuse_register", 00:05:46.102 "bdev_opal_new_user", 00:05:46.102 "bdev_opal_set_lock_state", 00:05:46.102 "bdev_opal_delete", 00:05:46.102 "bdev_opal_get_info", 00:05:46.102 "bdev_opal_create", 00:05:46.102 "bdev_nvme_opal_revert", 00:05:46.102 "bdev_nvme_opal_init", 00:05:46.102 "bdev_nvme_send_cmd", 00:05:46.102 "bdev_nvme_set_keys", 00:05:46.102 "bdev_nvme_get_path_iostat", 00:05:46.102 "bdev_nvme_get_mdns_discovery_info", 00:05:46.102 "bdev_nvme_stop_mdns_discovery", 00:05:46.102 "bdev_nvme_start_mdns_discovery", 00:05:46.102 "bdev_nvme_set_multipath_policy", 00:05:46.102 "bdev_nvme_set_preferred_path", 00:05:46.102 "bdev_nvme_get_io_paths", 00:05:46.102 "bdev_nvme_remove_error_injection", 00:05:46.102 "bdev_nvme_add_error_injection", 00:05:46.102 "bdev_nvme_get_discovery_info", 00:05:46.102 "bdev_nvme_stop_discovery", 00:05:46.102 "bdev_nvme_start_discovery", 00:05:46.102 "bdev_nvme_get_controller_health_info", 00:05:46.102 "bdev_nvme_disable_controller", 00:05:46.102 "bdev_nvme_enable_controller", 00:05:46.102 "bdev_nvme_reset_controller", 00:05:46.102 "bdev_nvme_get_transport_statistics", 00:05:46.102 "bdev_nvme_apply_firmware", 00:05:46.102 "bdev_nvme_detach_controller", 00:05:46.102 "bdev_nvme_get_controllers", 00:05:46.102 "bdev_nvme_attach_controller", 00:05:46.102 "bdev_nvme_set_hotplug", 00:05:46.102 "bdev_nvme_set_options", 00:05:46.102 "bdev_passthru_delete", 00:05:46.102 "bdev_passthru_create", 00:05:46.102 "bdev_lvol_set_parent_bdev", 00:05:46.102 "bdev_lvol_set_parent", 00:05:46.102 "bdev_lvol_check_shallow_copy", 00:05:46.102 "bdev_lvol_start_shallow_copy", 00:05:46.102 "bdev_lvol_grow_lvstore", 00:05:46.102 "bdev_lvol_get_lvols", 00:05:46.102 "bdev_lvol_get_lvstores", 00:05:46.102 "bdev_lvol_delete", 00:05:46.102 "bdev_lvol_set_read_only", 
00:05:46.102 "bdev_lvol_resize", 00:05:46.102 "bdev_lvol_decouple_parent", 00:05:46.102 "bdev_lvol_inflate", 00:05:46.102 "bdev_lvol_rename", 00:05:46.102 "bdev_lvol_clone_bdev", 00:05:46.102 "bdev_lvol_clone", 00:05:46.102 "bdev_lvol_snapshot", 00:05:46.102 "bdev_lvol_create", 00:05:46.102 "bdev_lvol_delete_lvstore", 00:05:46.102 "bdev_lvol_rename_lvstore", 00:05:46.102 "bdev_lvol_create_lvstore", 00:05:46.102 "bdev_raid_set_options", 00:05:46.102 "bdev_raid_remove_base_bdev", 00:05:46.102 "bdev_raid_add_base_bdev", 00:05:46.102 "bdev_raid_delete", 00:05:46.102 "bdev_raid_create", 00:05:46.102 "bdev_raid_get_bdevs", 00:05:46.102 "bdev_error_inject_error", 00:05:46.102 "bdev_error_delete", 00:05:46.102 "bdev_error_create", 00:05:46.102 "bdev_split_delete", 00:05:46.102 "bdev_split_create", 00:05:46.102 "bdev_delay_delete", 00:05:46.102 "bdev_delay_create", 00:05:46.102 "bdev_delay_update_latency", 00:05:46.102 "bdev_zone_block_delete", 00:05:46.102 "bdev_zone_block_create", 00:05:46.102 "blobfs_create", 00:05:46.103 "blobfs_detect", 00:05:46.103 "blobfs_set_cache_size", 00:05:46.103 "bdev_aio_delete", 00:05:46.103 "bdev_aio_rescan", 00:05:46.103 "bdev_aio_create", 00:05:46.103 "bdev_ftl_set_property", 00:05:46.103 "bdev_ftl_get_properties", 00:05:46.103 "bdev_ftl_get_stats", 00:05:46.103 "bdev_ftl_unmap", 00:05:46.103 "bdev_ftl_unload", 00:05:46.103 "bdev_ftl_delete", 00:05:46.103 "bdev_ftl_load", 00:05:46.103 "bdev_ftl_create", 00:05:46.103 "bdev_virtio_attach_controller", 00:05:46.103 "bdev_virtio_scsi_get_devices", 00:05:46.103 "bdev_virtio_detach_controller", 00:05:46.103 "bdev_virtio_blk_set_hotplug", 00:05:46.103 "bdev_iscsi_delete", 00:05:46.103 "bdev_iscsi_create", 00:05:46.103 "bdev_iscsi_set_options", 00:05:46.103 "accel_error_inject_error", 00:05:46.103 "ioat_scan_accel_module", 00:05:46.103 "dsa_scan_accel_module", 00:05:46.103 "iaa_scan_accel_module", 00:05:46.103 "keyring_file_remove_key", 00:05:46.103 "keyring_file_add_key", 00:05:46.103 
"keyring_linux_set_options", 00:05:46.103 "fsdev_aio_delete", 00:05:46.103 "fsdev_aio_create", 00:05:46.103 "iscsi_get_histogram", 00:05:46.103 "iscsi_enable_histogram", 00:05:46.103 "iscsi_set_options", 00:05:46.103 "iscsi_get_auth_groups", 00:05:46.103 "iscsi_auth_group_remove_secret", 00:05:46.103 "iscsi_auth_group_add_secret", 00:05:46.103 "iscsi_delete_auth_group", 00:05:46.103 "iscsi_create_auth_group", 00:05:46.103 "iscsi_set_discovery_auth", 00:05:46.103 "iscsi_get_options", 00:05:46.103 "iscsi_target_node_request_logout", 00:05:46.103 "iscsi_target_node_set_redirect", 00:05:46.103 "iscsi_target_node_set_auth", 00:05:46.103 "iscsi_target_node_add_lun", 00:05:46.103 "iscsi_get_stats", 00:05:46.103 "iscsi_get_connections", 00:05:46.103 "iscsi_portal_group_set_auth", 00:05:46.103 "iscsi_start_portal_group", 00:05:46.103 "iscsi_delete_portal_group", 00:05:46.103 "iscsi_create_portal_group", 00:05:46.103 "iscsi_get_portal_groups", 00:05:46.103 "iscsi_delete_target_node", 00:05:46.103 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.103 "iscsi_target_node_add_pg_ig_maps", 00:05:46.103 "iscsi_create_target_node", 00:05:46.103 "iscsi_get_target_nodes", 00:05:46.103 "iscsi_delete_initiator_group", 00:05:46.103 "iscsi_initiator_group_remove_initiators", 00:05:46.103 "iscsi_initiator_group_add_initiators", 00:05:46.103 "iscsi_create_initiator_group", 00:05:46.103 "iscsi_get_initiator_groups", 00:05:46.103 "nvmf_set_crdt", 00:05:46.103 "nvmf_set_config", 00:05:46.103 "nvmf_set_max_subsystems", 00:05:46.103 "nvmf_stop_mdns_prr", 00:05:46.103 "nvmf_publish_mdns_prr", 00:05:46.103 "nvmf_subsystem_get_listeners", 00:05:46.103 "nvmf_subsystem_get_qpairs", 00:05:46.103 "nvmf_subsystem_get_controllers", 00:05:46.103 "nvmf_get_stats", 00:05:46.103 "nvmf_get_transports", 00:05:46.103 "nvmf_create_transport", 00:05:46.103 "nvmf_get_targets", 00:05:46.103 "nvmf_delete_target", 00:05:46.103 "nvmf_create_target", 00:05:46.103 "nvmf_subsystem_allow_any_host", 00:05:46.103 
"nvmf_subsystem_set_keys", 00:05:46.103 "nvmf_subsystem_remove_host", 00:05:46.103 "nvmf_subsystem_add_host", 00:05:46.103 "nvmf_ns_remove_host", 00:05:46.103 "nvmf_ns_add_host", 00:05:46.103 "nvmf_subsystem_remove_ns", 00:05:46.103 "nvmf_subsystem_set_ns_ana_group", 00:05:46.103 "nvmf_subsystem_add_ns", 00:05:46.103 "nvmf_subsystem_listener_set_ana_state", 00:05:46.103 "nvmf_discovery_get_referrals", 00:05:46.103 "nvmf_discovery_remove_referral", 00:05:46.103 "nvmf_discovery_add_referral", 00:05:46.103 "nvmf_subsystem_remove_listener", 00:05:46.103 "nvmf_subsystem_add_listener", 00:05:46.103 "nvmf_delete_subsystem", 00:05:46.103 "nvmf_create_subsystem", 00:05:46.103 "nvmf_get_subsystems", 00:05:46.103 "env_dpdk_get_mem_stats", 00:05:46.103 "nbd_get_disks", 00:05:46.103 "nbd_stop_disk", 00:05:46.103 "nbd_start_disk", 00:05:46.103 "ublk_recover_disk", 00:05:46.103 "ublk_get_disks", 00:05:46.103 "ublk_stop_disk", 00:05:46.103 "ublk_start_disk", 00:05:46.103 "ublk_destroy_target", 00:05:46.103 "ublk_create_target", 00:05:46.103 "virtio_blk_create_transport", 00:05:46.103 "virtio_blk_get_transports", 00:05:46.103 "vhost_controller_set_coalescing", 00:05:46.103 "vhost_get_controllers", 00:05:46.103 "vhost_delete_controller", 00:05:46.103 "vhost_create_blk_controller", 00:05:46.103 "vhost_scsi_controller_remove_target", 00:05:46.103 "vhost_scsi_controller_add_target", 00:05:46.103 "vhost_start_scsi_controller", 00:05:46.103 "vhost_create_scsi_controller", 00:05:46.103 "thread_set_cpumask", 00:05:46.103 "scheduler_set_options", 00:05:46.103 "framework_get_governor", 00:05:46.103 "framework_get_scheduler", 00:05:46.103 "framework_set_scheduler", 00:05:46.103 "framework_get_reactors", 00:05:46.103 "thread_get_io_channels", 00:05:46.103 "thread_get_pollers", 00:05:46.103 "thread_get_stats", 00:05:46.103 "framework_monitor_context_switch", 00:05:46.103 "spdk_kill_instance", 00:05:46.103 "log_enable_timestamps", 00:05:46.103 "log_get_flags", 00:05:46.103 "log_clear_flag", 
00:05:46.103 "log_set_flag", 00:05:46.103 "log_get_level", 00:05:46.103 "log_set_level", 00:05:46.103 "log_get_print_level", 00:05:46.103 "log_set_print_level", 00:05:46.103 "framework_enable_cpumask_locks", 00:05:46.103 "framework_disable_cpumask_locks", 00:05:46.103 "framework_wait_init", 00:05:46.103 "framework_start_init", 00:05:46.103 "scsi_get_devices", 00:05:46.103 "bdev_get_histogram", 00:05:46.103 "bdev_enable_histogram", 00:05:46.103 "bdev_set_qos_limit", 00:05:46.103 "bdev_set_qd_sampling_period", 00:05:46.103 "bdev_get_bdevs", 00:05:46.103 "bdev_reset_iostat", 00:05:46.103 "bdev_get_iostat", 00:05:46.103 "bdev_examine", 00:05:46.103 "bdev_wait_for_examine", 00:05:46.103 "bdev_set_options", 00:05:46.103 "accel_get_stats", 00:05:46.103 "accel_set_options", 00:05:46.103 "accel_set_driver", 00:05:46.103 "accel_crypto_key_destroy", 00:05:46.103 "accel_crypto_keys_get", 00:05:46.103 "accel_crypto_key_create", 00:05:46.103 "accel_assign_opc", 00:05:46.103 "accel_get_module_info", 00:05:46.103 "accel_get_opc_assignments", 00:05:46.103 "vmd_rescan", 00:05:46.103 "vmd_remove_device", 00:05:46.103 "vmd_enable", 00:05:46.103 "sock_get_default_impl", 00:05:46.103 "sock_set_default_impl", 00:05:46.103 "sock_impl_set_options", 00:05:46.103 "sock_impl_get_options", 00:05:46.103 "iobuf_get_stats", 00:05:46.103 "iobuf_set_options", 00:05:46.103 "keyring_get_keys", 00:05:46.103 "framework_get_pci_devices", 00:05:46.103 "framework_get_config", 00:05:46.103 "framework_get_subsystems", 00:05:46.103 "fsdev_set_opts", 00:05:46.103 "fsdev_get_opts", 00:05:46.103 "trace_get_info", 00:05:46.103 "trace_get_tpoint_group_mask", 00:05:46.103 "trace_disable_tpoint_group", 00:05:46.103 "trace_enable_tpoint_group", 00:05:46.103 "trace_clear_tpoint_mask", 00:05:46.103 "trace_set_tpoint_mask", 00:05:46.103 "notify_get_notifications", 00:05:46.103 "notify_get_types", 00:05:46.103 "spdk_get_version", 00:05:46.103 "rpc_get_methods" 00:05:46.103 ] 00:05:46.103 16:18:37 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.103 16:18:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.103 16:18:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69958 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69958 ']' 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69958 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.103 16:18:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69958 00:05:46.363 16:18:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.363 16:18:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.363 16:18:37 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69958' 00:05:46.363 killing process with pid 69958 00:05:46.363 16:18:37 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69958 00:05:46.363 16:18:37 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69958 00:05:46.623 00:05:46.623 real 0m1.825s 00:05:46.623 user 0m2.918s 00:05:46.623 sys 0m0.603s 00:05:46.623 16:18:38 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.623 16:18:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.623 ************************************ 00:05:46.623 END TEST spdkcli_tcp 00:05:46.623 ************************************ 00:05:46.623 16:18:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.623 16:18:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.623 16:18:38 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.623 16:18:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.623 ************************************ 00:05:46.623 START TEST dpdk_mem_utility 00:05:46.623 ************************************ 00:05:46.623 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.884 * Looking for test storage... 00:05:46.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:46.884 
16:18:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.884 16:18:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.884 --rc genhtml_branch_coverage=1 00:05:46.884 --rc genhtml_function_coverage=1 00:05:46.884 --rc genhtml_legend=1 00:05:46.884 --rc geninfo_all_blocks=1 00:05:46.884 --rc geninfo_unexecuted_blocks=1 00:05:46.884 00:05:46.884 ' 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.884 --rc 
genhtml_branch_coverage=1 00:05:46.884 --rc genhtml_function_coverage=1 00:05:46.884 --rc genhtml_legend=1 00:05:46.884 --rc geninfo_all_blocks=1 00:05:46.884 --rc geninfo_unexecuted_blocks=1 00:05:46.884 00:05:46.884 ' 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.884 --rc genhtml_branch_coverage=1 00:05:46.884 --rc genhtml_function_coverage=1 00:05:46.884 --rc genhtml_legend=1 00:05:46.884 --rc geninfo_all_blocks=1 00:05:46.884 --rc geninfo_unexecuted_blocks=1 00:05:46.884 00:05:46.884 ' 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.884 --rc genhtml_branch_coverage=1 00:05:46.884 --rc genhtml_function_coverage=1 00:05:46.884 --rc genhtml_legend=1 00:05:46.884 --rc geninfo_all_blocks=1 00:05:46.884 --rc geninfo_unexecuted_blocks=1 00:05:46.884 00:05:46.884 ' 00:05:46.884 16:18:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.884 16:18:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70047 00:05:46.884 16:18:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.884 16:18:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70047 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70047 ']' 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:46.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.884 16:18:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.143 [2024-11-28 16:18:38.701460] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:47.143 [2024-11-28 16:18:38.701665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70047 ] 00:05:47.143 [2024-11-28 16:18:38.860500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.143 [2024-11-28 16:18:38.905878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.085 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.085 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:48.085 16:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.085 16:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.085 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.085 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.085 { 00:05:48.085 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.085 } 00:05:48.085 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.085 16:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.085 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:48.085 1 heaps totaling size 860.000000 MiB 00:05:48.085 size: 
860.000000 MiB heap id: 0 00:05:48.085 end heaps---------- 00:05:48.085 9 mempools totaling size 642.649841 MiB 00:05:48.085 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.085 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.085 size: 92.545471 MiB name: bdev_io_70047 00:05:48.085 size: 51.011292 MiB name: evtpool_70047 00:05:48.085 size: 50.003479 MiB name: msgpool_70047 00:05:48.085 size: 36.509338 MiB name: fsdev_io_70047 00:05:48.085 size: 21.763794 MiB name: PDU_Pool 00:05:48.085 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.085 size: 0.026123 MiB name: Session_Pool 00:05:48.085 end mempools------- 00:05:48.085 6 memzones totaling size 4.142822 MiB 00:05:48.085 size: 1.000366 MiB name: RG_ring_0_70047 00:05:48.085 size: 1.000366 MiB name: RG_ring_1_70047 00:05:48.085 size: 1.000366 MiB name: RG_ring_4_70047 00:05:48.085 size: 1.000366 MiB name: RG_ring_5_70047 00:05:48.085 size: 0.125366 MiB name: RG_ring_2_70047 00:05:48.085 size: 0.015991 MiB name: RG_ring_3_70047 00:05:48.085 end memzones------- 00:05:48.085 16:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.085 heap id: 0 total size: 860.000000 MiB number of busy elements: 306 number of free elements: 16 00:05:48.085 list of free elements. 
size: 13.936707 MiB 00:05:48.085 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:48.085 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:48.085 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:48.085 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:48.085 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:48.085 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:48.085 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:48.085 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:48.085 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:48.085 element at address: 0x20001d800000 with size: 0.568237 MiB 00:05:48.085 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:48.085 element at address: 0x200003e00000 with size: 0.488098 MiB 00:05:48.085 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:48.085 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:48.085 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:48.085 element at address: 0x200003a00000 with size: 0.353027 MiB 00:05:48.085 list of standard malloc elements. 
size: 199.266602 MiB 00:05:48.085 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:48.085 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:48.085 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:48.085 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:48.085 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:48.085 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:48.085 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:48.085 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:48.085 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:48.085 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:48.085 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:05:48.085 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:48.085 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d780 with 
size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:48.086 element at address: 
0x200003e7ec80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:48.086 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:48.086 
element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892980 with size: 0.000183 
MiB 00:05:48.086 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893e80 
with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:48.086 element at 
address: 0x20001d895380 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:48.086 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d440 with size: 0.000183 MiB 
00:05:48.087 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6e940 with 
size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:48.087 element at address: 
0x20002ac6fe40 with size: 0.000183 MiB 00:05:48.087 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:48.087 list of memzone associated elements. size: 646.796692 MiB 00:05:48.087 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:48.087 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.087 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:48.087 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.087 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:48.087 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70047_0 00:05:48.087 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:48.087 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70047_0 00:05:48.087 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:48.087 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70047_0 00:05:48.087 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:48.087 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70047_0 00:05:48.087 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:48.087 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.087 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:48.087 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.087 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:48.087 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70047 00:05:48.087 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:48.087 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70047 00:05:48.087 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:48.087 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70047 00:05:48.087 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:48.087 
associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.087 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:48.087 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.087 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:48.087 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.087 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:48.087 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.087 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:48.087 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70047 00:05:48.087 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:48.087 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70047 00:05:48.087 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:48.087 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70047 00:05:48.087 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:48.087 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70047 00:05:48.087 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:48.087 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70047 00:05:48.087 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:48.087 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70047 00:05:48.087 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:48.087 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.087 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:48.087 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:48.087 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:48.087 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.087 element at address: 0x200003a5eb80 with size: 0.125488 MiB 
00:05:48.087 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70047 00:05:48.087 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:48.087 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.087 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:48.087 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.087 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:05:48.087 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70047 00:05:48.087 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:48.087 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.087 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:48.087 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70047 00:05:48.087 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:48.087 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70047 00:05:48.087 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:05:48.088 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70047 00:05:48.088 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:48.088 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.088 16:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.088 16:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70047 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70047 ']' 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70047 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 70047 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70047' 00:05:48.088 killing process with pid 70047 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70047 00:05:48.088 16:18:39 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70047 00:05:48.347 00:05:48.347 real 0m1.698s 00:05:48.347 user 0m1.588s 00:05:48.347 sys 0m0.555s 00:05:48.347 16:18:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.347 16:18:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.347 ************************************ 00:05:48.347 END TEST dpdk_mem_utility 00:05:48.347 ************************************ 00:05:48.606 16:18:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.607 16:18:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.607 16:18:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.607 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.607 ************************************ 00:05:48.607 START TEST event 00:05:48.607 ************************************ 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.607 * Looking for test storage... 
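The teardown trace above (`killprocess 70047`) follows a common autotest pattern: refuse an empty PID, probe the process before killing, then kill and reap. A rough illustrative reconstruction of that guard — not SPDK's autotest_common.sh verbatim:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard seen in the trace above:
# refuse empty PIDs, probe with kill -0, then kill and reap.
killprocess() {
    local pid=$1
    if [ -z "$pid" ]; then
        return 1                      # no PID given
    fi
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                      # process already gone
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap if it is our child
    return 0
}

# Usage: start a background process and tear it down.
sleep 30 &
killprocess "$!"
```

The `kill -0` probe checks for existence without delivering a signal, which is why the trace can bail out cleanly when the process has already exited.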
00:05:48.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.607 16:18:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.607 16:18:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.607 16:18:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.607 16:18:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.607 16:18:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.607 16:18:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.607 16:18:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.607 16:18:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.607 16:18:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.607 16:18:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.607 16:18:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.607 16:18:40 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.607 16:18:40 event -- scripts/common.sh@345 -- # : 1 00:05:48.607 16:18:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.607 16:18:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.607 16:18:40 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.607 16:18:40 event -- scripts/common.sh@353 -- # local d=1 00:05:48.607 16:18:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.607 16:18:40 event -- scripts/common.sh@355 -- # echo 1 00:05:48.607 16:18:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.607 16:18:40 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.607 16:18:40 event -- scripts/common.sh@353 -- # local d=2 00:05:48.607 16:18:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.607 16:18:40 event -- scripts/common.sh@355 -- # echo 2 00:05:48.607 16:18:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.607 16:18:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.607 16:18:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.607 16:18:40 event -- scripts/common.sh@368 -- # return 0 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.607 --rc genhtml_branch_coverage=1 00:05:48.607 --rc genhtml_function_coverage=1 00:05:48.607 --rc genhtml_legend=1 00:05:48.607 --rc geninfo_all_blocks=1 00:05:48.607 --rc geninfo_unexecuted_blocks=1 00:05:48.607 00:05:48.607 ' 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.607 --rc genhtml_branch_coverage=1 00:05:48.607 --rc genhtml_function_coverage=1 00:05:48.607 --rc genhtml_legend=1 00:05:48.607 --rc geninfo_all_blocks=1 00:05:48.607 --rc geninfo_unexecuted_blocks=1 00:05:48.607 00:05:48.607 ' 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.607 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:48.607 --rc genhtml_branch_coverage=1 00:05:48.607 --rc genhtml_function_coverage=1 00:05:48.607 --rc genhtml_legend=1 00:05:48.607 --rc geninfo_all_blocks=1 00:05:48.607 --rc geninfo_unexecuted_blocks=1 00:05:48.607 00:05:48.607 ' 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.607 --rc genhtml_branch_coverage=1 00:05:48.607 --rc genhtml_function_coverage=1 00:05:48.607 --rc genhtml_legend=1 00:05:48.607 --rc geninfo_all_blocks=1 00:05:48.607 --rc geninfo_unexecuted_blocks=1 00:05:48.607 00:05:48.607 ' 00:05:48.607 16:18:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.607 16:18:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.607 16:18:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:48.607 16:18:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.607 16:18:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.868 ************************************ 00:05:48.868 START TEST event_perf 00:05:48.868 ************************************ 00:05:48.868 16:18:40 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.868 Running I/O for 1 seconds...[2024-11-28 16:18:40.428065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
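The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh) splits dotted versions on `.-:` and compares them component-wise, padding the shorter one with zeros. A standalone sketch of that comparison — the name `version_lt` and this implementation are illustrative, not the original script:

```shell
#!/usr/bin/env bash
# Component-wise dotted-version comparison, in the spirit of the
# cmp_versions trace above (illustrative sketch, not SPDK's script).
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=${#a[@]} i
    if (( ${#b[@]} > n )); then n=${#b[@]}; fi
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad missing components with 0
        if (( 10#$x < 10#$y )); then return 0; fi
        if (( 10#$x > 10#$y )); then return 1; fi
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

The `10#` prefix forces base-10 so components like `08` are not misread as octal.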
00:05:48.868 [2024-11-28 16:18:40.428240] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:05:48.868 [2024-11-28 16:18:40.587539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.868 [2024-11-28 16:18:40.633967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.868 [2024-11-28 16:18:40.634221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.868 [2024-11-28 16:18:40.634150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.868 Running I/O for 1 seconds...[2024-11-28 16:18:40.634331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.249 00:05:50.249 lcore 0: 72604 00:05:50.249 lcore 1: 72593 00:05:50.249 lcore 2: 72596 00:05:50.249 lcore 3: 72600 00:05:50.249 done. 
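The four `lcore N:` counters above come from running event_perf with `-m 0xF`, a hex core mask whose set bits select the lcores — matching the four "Reactor started on core 0..3" notices. A small sketch of decoding such a mask (helper name `decode_coremask` is mine, for illustration):

```shell
#!/usr/bin/env bash
# Decode a -m core mask (e.g. 0xF) into the lcore list it enables.
decode_coremask() {
    local mask i
    local -a out=()
    mask=$(( $1 ))                    # accepts hex like 0xF
    for (( i = 0; i < 64; i++ )); do
        if (( (mask >> i) & 1 )); then out+=("$i"); fi
    done
    echo "${out[@]}"
}

decode_coremask 0xF   # -> 0 1 2 3
```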
00:05:50.249 00:05:50.249 real 0m1.350s 00:05:50.249 user 0m4.112s 00:05:50.249 sys 0m0.117s 00:05:50.249 ************************************ 00:05:50.249 END TEST event_perf 00:05:50.249 ************************************ 00:05:50.249 16:18:41 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.249 16:18:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.249 16:18:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.249 16:18:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:50.249 16:18:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.249 16:18:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.249 ************************************ 00:05:50.249 START TEST event_reactor 00:05:50.249 ************************************ 00:05:50.249 16:18:41 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.249 [2024-11-28 16:18:41.849392] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:50.249 [2024-11-28 16:18:41.849587] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70176 ] 00:05:50.249 [2024-11-28 16:18:42.013466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.509 [2024-11-28 16:18:42.057993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.449 test_start 00:05:51.449 oneshot 00:05:51.449 tick 100 00:05:51.449 tick 100 00:05:51.449 tick 250 00:05:51.449 tick 100 00:05:51.449 tick 100 00:05:51.449 tick 100 00:05:51.449 tick 250 00:05:51.449 tick 500 00:05:51.449 tick 100 00:05:51.449 tick 100 00:05:51.449 tick 250 00:05:51.449 tick 100 00:05:51.449 tick 100 00:05:51.449 test_end 00:05:51.449 ************************************ 00:05:51.449 END TEST event_reactor 00:05:51.449 ************************************ 00:05:51.449 00:05:51.449 real 0m1.348s 00:05:51.449 user 0m1.140s 00:05:51.449 sys 0m0.101s 00:05:51.449 16:18:43 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.449 16:18:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 16:18:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.449 16:18:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:51.449 16:18:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.449 16:18:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 ************************************ 00:05:51.710 START TEST event_reactor_perf 00:05:51.710 ************************************ 00:05:51.710 16:18:43 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.710 [2024-11-28 
16:18:43.272324] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:51.710 [2024-11-28 16:18:43.272455] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70209 ] 00:05:51.710 [2024-11-28 16:18:43.438891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.969 [2024-11-28 16:18:43.501144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.909 test_start 00:05:52.909 test_end 00:05:52.909 Performance: 400714 events per second 00:05:52.909 00:05:52.909 real 0m1.367s 00:05:52.909 user 0m1.145s 00:05:52.909 sys 0m0.115s 00:05:52.909 16:18:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.909 ************************************ 00:05:52.909 END TEST event_reactor_perf 00:05:52.909 ************************************ 00:05:52.909 16:18:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.909 16:18:44 event -- event/event.sh@49 -- # uname -s 00:05:52.909 16:18:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:52.909 16:18:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.909 16:18:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.909 16:18:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.909 16:18:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.909 ************************************ 00:05:52.909 START TEST event_scheduler 00:05:52.909 ************************************ 00:05:52.909 16:18:44 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.169 * Looking for test storage... 
00:05:53.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.169 16:18:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.169 --rc genhtml_branch_coverage=1 00:05:53.169 --rc genhtml_function_coverage=1 00:05:53.169 --rc genhtml_legend=1 00:05:53.169 --rc geninfo_all_blocks=1 00:05:53.169 --rc geninfo_unexecuted_blocks=1 00:05:53.169 00:05:53.169 ' 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.169 --rc genhtml_branch_coverage=1 00:05:53.169 --rc genhtml_function_coverage=1 00:05:53.169 --rc 
genhtml_legend=1 00:05:53.169 --rc geninfo_all_blocks=1 00:05:53.169 --rc geninfo_unexecuted_blocks=1 00:05:53.169 00:05:53.169 ' 00:05:53.169 16:18:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.169 --rc genhtml_branch_coverage=1 00:05:53.169 --rc genhtml_function_coverage=1 00:05:53.169 --rc genhtml_legend=1 00:05:53.169 --rc geninfo_all_blocks=1 00:05:53.170 --rc geninfo_unexecuted_blocks=1 00:05:53.170 00:05:53.170 ' 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.170 --rc genhtml_branch_coverage=1 00:05:53.170 --rc genhtml_function_coverage=1 00:05:53.170 --rc genhtml_legend=1 00:05:53.170 --rc geninfo_all_blocks=1 00:05:53.170 --rc geninfo_unexecuted_blocks=1 00:05:53.170 00:05:53.170 ' 00:05:53.170 16:18:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.170 16:18:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70280 00:05:53.170 16:18:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.170 16:18:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
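The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the `waitforlisten` helper, which polls with a retry cap (the `max_retries=100` local in the trace) until the target's RPC socket appears. A minimal poll-for-socket sketch in that spirit — names and retry policy here are illustrative, not SPDK's code:

```shell
#!/usr/bin/env bash
# Poll until a UNIX socket path exists, with a bounded retry count
# (illustrative sketch of the waitforlisten idea, not the real helper).
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries > 0 )); do
        if [ -S "$sock" ]; then
            return 0                  # socket is there
        fi
        retries=$(( retries - 1 ))
        sleep 0.1
    done
    return 1                          # timed out
}

wait_for_sock /var/tmp/spdk.sock 1 || echo "no SPDK socket on this host"
```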
00:05:53.170 16:18:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70280 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70280 ']' 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.170 16:18:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.430 [2024-11-28 16:18:44.979236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:53.430 [2024-11-28 16:18:44.979415] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70280 ] 00:05:53.430 [2024-11-28 16:18:45.139578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.689 [2024-11-28 16:18:45.217161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.689 [2024-11-28 16:18:45.217362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.689 [2024-11-28 16:18:45.217536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.689 [2024-11-28 16:18:45.217421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:54.259 16:18:45 event.event_scheduler -- 
scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.259 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.259 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.259 POWER: Cannot set governor of lcore 0 to performance 00:05:54.259 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.259 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.259 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.259 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.259 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:54.259 POWER: Unable to set Power Management Environment for lcore 0 00:05:54.259 [2024-11-28 16:18:45.810455] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:54.259 [2024-11-28 16:18:45.810505] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:54.259 [2024-11-28 16:18:45.810545] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.259 [2024-11-28 16:18:45.810588] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.259 [2024-11-28 16:18:45.810617] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.259 [2024-11-28 16:18:45.810656] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 
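The `POWER: Cannot set governor of lcore 0` messages above mean the dynamic scheduler's DPDK governor could not write the standard Linux cpufreq sysfs file, so it falls back (as the subsequent "Unable to initialize dpdk governor" notice shows). A probe of what it would need — the path follows the kernel cpufreq sysfs layout and is typically absent or read-only inside a VM like this test host:

```shell
#!/usr/bin/env bash
# Check whether the cpufreq governor file the DPDK power library
# wants to write is actually writable on this host (often not in VMs).
gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -w "$gov" ]; then
    echo "governor writable: $(cat "$gov")"
else
    echo "governor not writable; dynamic scheduler falls back"
fi
```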
00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 [2024-11-28 16:18:45.936191] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 ************************************ 00:05:54.259 START TEST scheduler_create_thread 00:05:54.259 ************************************ 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 2 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 3 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 4 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 5 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 6 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # 
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 7 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.259 8 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.259 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.519 9 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.519 10 
00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.519 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.507 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.507 16:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:55.507 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.507 16:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.888 16:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.888 16:18:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.888 16:18:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.888 16:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.888 16:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.827 ************************************ 00:05:57.827 END TEST scheduler_create_thread 00:05:57.827 ************************************ 00:05:57.827 16:18:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.827 00:05:57.827 real 0m3.365s 00:05:57.827 user 0m0.026s 00:05:57.827 sys 0m0.010s 00:05:57.827 16:18:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.827 16:18:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.827 16:18:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.827 16:18:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70280 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70280 ']' 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70280 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70280 00:05:57.827 killing process with pid 70280 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 70280' 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70280 00:05:57.827 16:18:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70280 00:05:58.087 [2024-11-28 16:18:49.694330] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:58.657 00:05:58.657 real 0m5.441s 00:05:58.657 user 0m10.501s 00:05:58.657 sys 0m0.585s 00:05:58.657 ************************************ 00:05:58.657 END TEST event_scheduler 00:05:58.657 ************************************ 00:05:58.657 16:18:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.657 16:18:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.657 16:18:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:58.657 16:18:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:58.657 16:18:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.657 16:18:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.657 16:18:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.657 ************************************ 00:05:58.657 START TEST app_repeat 00:05:58.657 ************************************ 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:58.657 16:18:50 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70390 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.657 Process app_repeat pid: 70390 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70390' 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:58.657 spdk_app_start Round 0 00:05:58.657 16:18:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70390 /var/tmp/spdk-nbd.sock 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70390 ']' 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.657 16:18:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.657 [2024-11-28 16:18:50.250825] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:58.657 [2024-11-28 16:18:50.250948] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70390 ] 00:05:58.657 [2024-11-28 16:18:50.411280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.917 [2024-11-28 16:18:50.456230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.917 [2024-11-28 16:18:50.456357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.487 16:18:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.487 16:18:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:59.487 16:18:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.747 Malloc0 00:05:59.747 16:18:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.747 Malloc1 00:06:00.007 16:18:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.007 16:18:51 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.007 /dev/nbd0 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.007 16:18:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.007 1+0 records in 00:06:00.007 1+0 
records out 00:06:00.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314858 s, 13.0 MB/s 00:06:00.007 16:18:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.266 16:18:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.266 16:18:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.266 16:18:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.266 16:18:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.266 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.266 16:18:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.266 16:18:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.266 /dev/nbd1 00:06:00.266 16:18:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.266 16:18:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.266 1+0 records in 00:06:00.266 1+0 records out 00:06:00.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323502 s, 12.7 MB/s 00:06:00.266 16:18:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.555 16:18:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.555 16:18:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.555 16:18:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.555 16:18:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.555 { 00:06:00.555 "nbd_device": "/dev/nbd0", 00:06:00.555 "bdev_name": "Malloc0" 00:06:00.555 }, 00:06:00.555 { 00:06:00.555 "nbd_device": "/dev/nbd1", 00:06:00.555 "bdev_name": "Malloc1" 00:06:00.555 } 00:06:00.555 ]' 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.555 { 00:06:00.555 "nbd_device": "/dev/nbd0", 00:06:00.555 "bdev_name": "Malloc0" 00:06:00.555 }, 00:06:00.555 { 00:06:00.555 "nbd_device": "/dev/nbd1", 00:06:00.555 "bdev_name": "Malloc1" 00:06:00.555 } 00:06:00.555 ]' 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.555 /dev/nbd1' 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.555 /dev/nbd1' 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.555 16:18:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.815 256+0 records in 00:06:00.815 256+0 records out 00:06:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122662 s, 85.5 MB/s 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.815 256+0 records in 00:06:00.815 256+0 records out 00:06:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02302 s, 45.6 MB/s 00:06:00.815 16:18:52 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.815 256+0 records in 00:06:00.815 256+0 records out 00:06:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246054 s, 42.6 MB/s 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.815 16:18:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.816 16:18:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.075 16:18:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.336 16:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.336 16:18:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.336 16:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.336 16:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.336 16:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.596 16:18:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.596 16:18:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.596 16:18:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.855 [2024-11-28 16:18:53.495502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.855 [2024-11-28 16:18:53.536693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.856 [2024-11-28 16:18:53.536695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.856 
[2024-11-28 16:18:53.578271] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.856 [2024-11-28 16:18:53.578338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.149 spdk_app_start Round 1 00:06:05.149 16:18:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.149 16:18:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.149 16:18:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70390 /var/tmp/spdk-nbd.sock 00:06:05.149 16:18:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70390 ']' 00:06:05.149 16:18:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.149 16:18:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.149 16:18:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:05.149 16:18:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.150 16:18:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.150 16:18:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.150 16:18:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.150 16:18:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.150 Malloc0 00:06:05.150 16:18:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.150 Malloc1 00:06:05.150 16:18:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.150 16:18:56 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.150 16:18:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.410 /dev/nbd0 00:06:05.410 16:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.410 16:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.410 1+0 records in 00:06:05.410 1+0 records out 00:06:05.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215527 s, 19.0 MB/s 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.410 
16:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.410 16:18:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.410 16:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.410 16:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.410 16:18:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.669 /dev/nbd1 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.669 1+0 records in 00:06:05.669 1+0 records out 00:06:05.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435632 s, 9.4 MB/s 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.669 16:18:57 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.669 16:18:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.669 16:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.929 { 00:06:05.929 "nbd_device": "/dev/nbd0", 00:06:05.929 "bdev_name": "Malloc0" 00:06:05.929 }, 00:06:05.929 { 00:06:05.929 "nbd_device": "/dev/nbd1", 00:06:05.929 "bdev_name": "Malloc1" 00:06:05.929 } 00:06:05.929 ]' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.929 { 00:06:05.929 "nbd_device": "/dev/nbd0", 00:06:05.929 "bdev_name": "Malloc0" 00:06:05.929 }, 00:06:05.929 { 00:06:05.929 "nbd_device": "/dev/nbd1", 00:06:05.929 "bdev_name": "Malloc1" 00:06:05.929 } 00:06:05.929 ]' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.929 /dev/nbd1' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.929 /dev/nbd1' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.929 
16:18:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.929 256+0 records in 00:06:05.929 256+0 records out 00:06:05.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128286 s, 81.7 MB/s 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.929 256+0 records in 00:06:05.929 256+0 records out 00:06:05.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231138 s, 45.4 MB/s 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.929 16:18:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.929 256+0 records in 00:06:05.930 256+0 records out 00:06:05.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023821 s, 44.0 MB/s 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.930 16:18:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.190 16:18:57 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.190 16:18:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.450 16:18:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.709 16:18:58 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.709 16:18:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.709 16:18:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.969 16:18:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.229 [2024-11-28 16:18:58.811454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.229 [2024-11-28 16:18:58.852381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.229 [2024-11-28 16:18:58.852422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.229 [2024-11-28 16:18:58.894007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.229 [2024-11-28 16:18:58.894065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.559 spdk_app_start Round 2 00:06:10.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:10.559 16:19:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.559 16:19:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.559 16:19:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70390 /var/tmp/spdk-nbd.sock 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70390 ']' 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.559 16:19:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.559 16:19:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.559 Malloc0 00:06:10.559 16:19:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.559 Malloc1 00:06:10.559 16:19:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.559 16:19:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.819 /dev/nbd0 00:06:10.819 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.819 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.819 16:19:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.820 1+0 records in 00:06:10.820 1+0 records out 00:06:10.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260579 s, 15.7 MB/s 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.820 16:19:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.820 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.820 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.820 16:19:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.079 /dev/nbd1 00:06:11.079 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.079 16:19:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.079 16:19:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:11.079 16:19:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:11.080 16:19:02 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.080 1+0 records in 00:06:11.080 1+0 records out 00:06:11.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207264 s, 19.8 MB/s 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.080 16:19:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.080 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.080 16:19:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.080 16:19:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.080 16:19:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.080 16:19:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.339 16:19:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.339 { 00:06:11.339 "nbd_device": "/dev/nbd0", 00:06:11.339 "bdev_name": "Malloc0" 00:06:11.339 }, 00:06:11.340 { 00:06:11.340 "nbd_device": "/dev/nbd1", 00:06:11.340 "bdev_name": "Malloc1" 00:06:11.340 } 00:06:11.340 ]' 00:06:11.340 16:19:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.340 { 
00:06:11.340 "nbd_device": "/dev/nbd0", 00:06:11.340 "bdev_name": "Malloc0" 00:06:11.340 }, 00:06:11.340 { 00:06:11.340 "nbd_device": "/dev/nbd1", 00:06:11.340 "bdev_name": "Malloc1" 00:06:11.340 } 00:06:11.340 ]' 00:06:11.340 16:19:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.340 16:19:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.340 /dev/nbd1' 00:06:11.340 16:19:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.340 /dev/nbd1' 00:06:11.340 16:19:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.340 256+0 records in 00:06:11.340 256+0 records out 00:06:11.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431244 s, 243 MB/s 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.340 16:19:03 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.340 256+0 records in 00:06:11.340 256+0 records out 00:06:11.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218684 s, 47.9 MB/s 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.340 256+0 records in 00:06:11.340 256+0 records out 00:06:11.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242386 s, 43.3 MB/s 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.340 16:19:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.600 16:19:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.860 16:19:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.860 16:19:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.118 16:19:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.118 16:19:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.377 16:19:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.378 
[2024-11-28 16:19:04.107006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.378 [2024-11-28 16:19:04.147133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.378 [2024-11-28 16:19:04.147135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.637 [2024-11-28 16:19:04.187949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.637 [2024-11-28 16:19:04.188010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.931 16:19:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70390 /var/tmp/spdk-nbd.sock 00:06:15.931 16:19:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70390 ']' 00:06:15.931 16:19:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.931 16:19:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.931 16:19:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.931 16:19:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.931 16:19:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.931 16:19:07 event.app_repeat -- event/event.sh@39 -- # killprocess 70390 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70390 ']' 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70390 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70390 00:06:15.931 killing process with pid 70390 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70390' 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70390 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70390 00:06:15.931 spdk_app_start is called in Round 0. 00:06:15.931 Shutdown signal received, stop current app iteration 00:06:15.931 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:15.931 spdk_app_start is called in Round 1. 00:06:15.931 Shutdown signal received, stop current app iteration 00:06:15.931 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:15.931 spdk_app_start is called in Round 2. 
00:06:15.931 Shutdown signal received, stop current app iteration 00:06:15.931 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:15.931 spdk_app_start is called in Round 3. 00:06:15.931 Shutdown signal received, stop current app iteration 00:06:15.931 ************************************ 00:06:15.931 END TEST app_repeat 00:06:15.931 ************************************ 00:06:15.931 16:19:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.931 16:19:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.931 00:06:15.931 real 0m17.209s 00:06:15.931 user 0m37.955s 00:06:15.931 sys 0m2.369s 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.931 16:19:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 16:19:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.931 16:19:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:15.931 16:19:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.931 16:19:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.931 16:19:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 ************************************ 00:06:15.931 START TEST cpu_locks 00:06:15.931 ************************************ 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:15.931 * Looking for test storage... 
00:06:15.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.931 16:19:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 
00:06:15.931 00:06:15.931 ' 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 16:19:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.931 16:19:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.931 16:19:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.931 16:19:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.931 16:19:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.191 ************************************ 00:06:16.191 START TEST default_locks 00:06:16.191 ************************************ 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70811 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.191 
16:19:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70811 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70811 ']' 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.191 16:19:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.191 [2024-11-28 16:19:07.794511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:16.191 [2024-11-28 16:19:07.794648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70811 ] 00:06:16.191 [2024-11-28 16:19:07.954373] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.451 [2024-11-28 16:19:07.997933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.021 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.021 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:17.021 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70811 00:06:17.021 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70811 00:06:17.021 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.280 16:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70811 00:06:17.280 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70811 ']' 00:06:17.280 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70811 00:06:17.280 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.280 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.280 16:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70811 00:06:17.280 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.280 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.280 killing process with pid 70811 00:06:17.280 16:19:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70811' 00:06:17.280 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70811 00:06:17.280 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70811 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70811 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70811 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70811 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70811 ']' 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70811) - No such process 00:06:17.850 ERROR: process (pid: 70811) is no longer running 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.850 00:06:17.850 real 0m1.699s 00:06:17.850 user 0m1.651s 00:06:17.850 sys 0m0.572s 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.850 16:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.850 ************************************ 00:06:17.850 END TEST default_locks 00:06:17.850 ************************************ 00:06:17.850 16:19:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:17.850 16:19:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:17.850 16:19:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.850 16:19:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.850 ************************************ 00:06:17.850 START TEST default_locks_via_rpc 00:06:17.850 ************************************ 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70858 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70858 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70858 ']' 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.850 16:19:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.850 [2024-11-28 16:19:09.554605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:17.850 [2024-11-28 16:19:09.554729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:06:18.110 [2024-11-28 16:19:09.716652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.110 [2024-11-28 16:19:09.759869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.678 16:19:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.678 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70858 00:06:18.679 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70858 00:06:18.679 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.937 16:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70858 00:06:18.938 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70858 ']' 00:06:18.938 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70858 00:06:18.938 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:18.938 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.938 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70858 00:06:19.198 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.198 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.198 killing process with pid 70858 00:06:19.198 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70858' 00:06:19.198 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70858 00:06:19.198 16:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70858 00:06:19.458 00:06:19.458 real 0m1.619s 00:06:19.458 user 0m1.579s 00:06:19.458 sys 0m0.559s 00:06:19.458 16:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.458 16:19:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.458 ************************************ 00:06:19.458 END TEST default_locks_via_rpc 00:06:19.458 ************************************ 00:06:19.458 16:19:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.458 16:19:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.458 16:19:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.458 16:19:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.458 ************************************ 00:06:19.458 START TEST non_locking_app_on_locked_coremask 00:06:19.458 ************************************ 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70905 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70905 /var/tmp/spdk.sock 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70905 ']' 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.458 16:19:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.717 [2024-11-28 16:19:11.242762] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:19.717 [2024-11-28 16:19:11.242913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:06:19.717 [2024-11-28 16:19:11.403269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.717 [2024-11-28 16:19:11.446667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70921 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70921 /var/tmp/spdk2.sock 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70921 ']' 00:06:20.287 16:19:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.287 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.547 [2024-11-28 16:19:12.141176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:20.547 [2024-11-28 16:19:12.141307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:06:20.547 [2024-11-28 16:19:12.292068] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.547 [2024-11-28 16:19:12.292124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.807 [2024-11-28 16:19:12.387586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.377 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.377 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:21.377 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70905 00:06:21.377 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70905 00:06:21.377 16:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70905 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70905 ']' 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70905 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70905 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.948 killing process with pid 70905 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70905' 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70905 00:06:21.948 16:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70905 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70921 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70921 ']' 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70921 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70921 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.887 killing process with pid 70921 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70921' 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70921 00:06:22.887 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70921 00:06:23.147 00:06:23.147 real 0m3.603s 00:06:23.147 user 0m3.782s 00:06:23.147 sys 0m1.099s 00:06:23.147 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:23.147 16:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 ************************************ 00:06:23.147 END TEST non_locking_app_on_locked_coremask 00:06:23.147 ************************************ 00:06:23.147 16:19:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.147 16:19:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.147 16:19:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.147 16:19:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 ************************************ 00:06:23.147 START TEST locking_app_on_unlocked_coremask 00:06:23.147 ************************************ 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70992 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70992 /var/tmp/spdk.sock 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70992 ']' 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.147 16:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.407 [2024-11-28 16:19:14.920482] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:23.407 [2024-11-28 16:19:14.920604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70992 ] 00:06:23.407 [2024-11-28 16:19:15.081154] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:23.407 [2024-11-28 16:19:15.081228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.407 [2024-11-28 16:19:15.125764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71002 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71002 /var/tmp/spdk2.sock 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71002 
']' 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.977 16:19:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.268 [2024-11-28 16:19:15.797335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:24.268 [2024-11-28 16:19:15.797448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71002 ] 00:06:24.269 [2024-11-28 16:19:15.947783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.538 [2024-11-28 16:19:16.041276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.123 16:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.123 16:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:25.123 16:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71002 00:06:25.123 16:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71002 00:06:25.123 16:19:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.384 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70992 00:06:25.384 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70992 ']' 00:06:25.384 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70992 00:06:25.384 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.384 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.384 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70992 00:06:25.647 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.647 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.647 killing process with pid 70992 00:06:25.647 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70992' 00:06:25.647 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70992 00:06:25.647 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70992 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71002 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71002 ']' 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71002 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71002 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.216 killing process with pid 71002 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71002' 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71002 00:06:26.216 16:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71002 00:06:26.784 00:06:26.784 real 0m3.541s 00:06:26.784 user 0m3.676s 00:06:26.784 sys 0m1.097s 00:06:26.784 16:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.784 16:19:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.784 ************************************ 00:06:26.784 END TEST locking_app_on_unlocked_coremask 00:06:26.784 ************************************ 00:06:26.784 16:19:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.784 16:19:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.784 16:19:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.784 16:19:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 ************************************ 00:06:26.785 START TEST 
locking_app_on_locked_coremask 00:06:26.785 ************************************ 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71066 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71066 /var/tmp/spdk.sock 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71066 ']' 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.785 16:19:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 [2024-11-28 16:19:18.525378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:26.785 [2024-11-28 16:19:18.525516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71066 ] 00:06:27.044 [2024-11-28 16:19:18.683945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.044 [2024-11-28 16:19:18.728286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71082 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71082 /var/tmp/spdk2.sock 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71082 /var/tmp/spdk2.sock 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71082 /var/tmp/spdk2.sock 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71082 ']' 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.615 16:19:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.875 [2024-11-28 16:19:19.402730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:27.875 [2024-11-28 16:19:19.402850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:06:27.875 [2024-11-28 16:19:19.550589] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71066 has claimed it. 00:06:27.875 [2024-11-28 16:19:19.550658] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:28.445 ERROR: process (pid: 71082) is no longer running 00:06:28.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71082) - No such process 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71066 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71066 00:06:28.446 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71066 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71066 ']' 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71066 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71066 00:06:28.706 
16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.706 killing process with pid 71066 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71066' 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71066 00:06:28.706 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71066 00:06:29.275 00:06:29.275 real 0m2.318s 00:06:29.275 user 0m2.492s 00:06:29.275 sys 0m0.654s 00:06:29.276 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.276 16:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.276 ************************************ 00:06:29.276 END TEST locking_app_on_locked_coremask 00:06:29.276 ************************************ 00:06:29.276 16:19:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.276 16:19:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.276 16:19:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.276 16:19:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.276 ************************************ 00:06:29.276 START TEST locking_overlapped_coremask 00:06:29.276 ************************************ 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71129 00:06:29.276 16:19:20 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71129 /var/tmp/spdk.sock 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71129 ']' 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.276 16:19:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.276 [2024-11-28 16:19:20.909881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.276 [2024-11-28 16:19:20.910297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71129 ] 00:06:29.535 [2024-11-28 16:19:21.070106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.536 [2024-11-28 16:19:21.116003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.536 [2024-11-28 16:19:21.116069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.536 [2024-11-28 16:19:21.116216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71142 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71142 /var/tmp/spdk2.sock 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71142 /var/tmp/spdk2.sock 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71142 /var/tmp/spdk2.sock 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71142 ']' 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.106 16:19:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.106 [2024-11-28 16:19:21.788064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:30.106 [2024-11-28 16:19:21.788184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71142 ] 00:06:30.365 [2024-11-28 16:19:21.938150] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71129 has claimed it. 00:06:30.365 [2024-11-28 16:19:21.938228] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:30.930 ERROR: process (pid: 71142) is no longer running 00:06:30.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71142) - No such process 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71129 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71129 ']' 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71129 00:06:30.930 16:19:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71129 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71129' 00:06:30.930 killing process with pid 71129 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71129 00:06:30.930 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71129 00:06:31.189 00:06:31.189 real 0m2.051s 00:06:31.189 user 0m5.385s 00:06:31.189 sys 0m0.520s 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.189 ************************************ 00:06:31.189 END TEST locking_overlapped_coremask 00:06:31.189 ************************************ 00:06:31.189 16:19:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:31.189 16:19:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.189 16:19:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.189 16:19:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.189 ************************************ 00:06:31.189 START TEST 
locking_overlapped_coremask_via_rpc 00:06:31.189 ************************************ 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71189 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71189 /var/tmp/spdk.sock 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71189 ']' 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.189 16:19:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.449 [2024-11-28 16:19:23.033036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:31.449 [2024-11-28 16:19:23.033195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:06:31.449 [2024-11-28 16:19:23.194257] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:31.449 [2024-11-28 16:19:23.194598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.709 [2024-11-28 16:19:23.240398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.709 [2024-11-28 16:19:23.240511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.709 [2024-11-28 16:19:23.240664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71202 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71202 /var/tmp/spdk2.sock 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71202 ']' 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.278 16:19:23 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.278 16:19:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.278 [2024-11-28 16:19:23.932440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:32.278 [2024-11-28 16:19:23.932555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71202 ] 00:06:32.537 [2024-11-28 16:19:24.082602] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.537 [2024-11-28 16:19:24.082670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.537 [2024-11-28 16:19:24.176783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.537 [2024-11-28 16:19:24.180052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.537 [2024-11-28 16:19:24.180184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.107 16:19:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.107 [2024-11-28 16:19:24.769061] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71189 has claimed it. 00:06:33.107 request: 00:06:33.107 { 00:06:33.107 "method": "framework_enable_cpumask_locks", 00:06:33.107 "req_id": 1 00:06:33.107 } 00:06:33.107 Got JSON-RPC error response 00:06:33.107 response: 00:06:33.107 { 00:06:33.107 "code": -32603, 00:06:33.107 "message": "Failed to claim CPU core: 2" 00:06:33.107 } 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71189 /var/tmp/spdk.sock 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71189 ']' 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.107 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71202 /var/tmp/spdk2.sock 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71202 ']' 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.366 16:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.625 00:06:33.625 real 0m2.248s 00:06:33.625 user 0m1.023s 00:06:33.625 sys 0m0.160s 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.625 16:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.625 ************************************ 00:06:33.625 END TEST locking_overlapped_coremask_via_rpc 00:06:33.625 ************************************ 00:06:33.625 16:19:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:33.625 16:19:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71189 ]] 00:06:33.625 16:19:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71189 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71189 ']' 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71189 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71189 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.625 killing process with pid 71189 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71189' 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71189 00:06:33.625 16:19:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71189 00:06:34.194 16:19:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71202 ]] 00:06:34.194 16:19:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71202 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71202 ']' 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71202 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71202 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:34.194 killing process with pid 71202 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71202' 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71202 00:06:34.194 16:19:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71202 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71189 ]] 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71189 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71189 ']' 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71189 00:06:34.454 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71189) - No such process 00:06:34.454 Process with pid 71189 is not found 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71189 is not found' 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71202 ]] 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71202 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71202 ']' 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71202 00:06:34.454 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71202) - No such process 00:06:34.454 Process with pid 71202 is not found 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71202 is not found' 00:06:34.454 16:19:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.454 00:06:34.454 real 0m18.664s 00:06:34.454 user 0m30.735s 00:06:34.454 sys 0m5.820s 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.454 16:19:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.454 
************************************ 00:06:34.454 END TEST cpu_locks 00:06:34.454 ************************************ 00:06:34.454 00:06:34.454 real 0m46.045s 00:06:34.454 user 1m25.818s 00:06:34.454 sys 0m9.548s 00:06:34.454 16:19:26 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.454 16:19:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.454 ************************************ 00:06:34.454 END TEST event 00:06:34.454 ************************************ 00:06:34.714 16:19:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:34.714 16:19:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.714 16:19:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.714 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.714 ************************************ 00:06:34.714 START TEST thread 00:06:34.714 ************************************ 00:06:34.714 16:19:26 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:34.714 * Looking for test storage... 
00:06:34.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:34.714 16:19:26 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.714 16:19:26 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.714 16:19:26 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.714 16:19:26 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.714 16:19:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.714 16:19:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.714 16:19:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.714 16:19:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.714 16:19:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.714 16:19:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.714 16:19:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.714 16:19:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.714 16:19:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.714 16:19:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.714 16:19:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.714 16:19:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:34.714 16:19:26 thread -- scripts/common.sh@345 -- # : 1 00:06:34.714 16:19:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.714 16:19:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.714 16:19:26 thread -- scripts/common.sh@365 -- # decimal 1 00:06:34.714 16:19:26 thread -- scripts/common.sh@353 -- # local d=1 00:06:34.714 16:19:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.714 16:19:26 thread -- scripts/common.sh@355 -- # echo 1 00:06:34.714 16:19:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.714 16:19:26 thread -- scripts/common.sh@366 -- # decimal 2 00:06:34.715 16:19:26 thread -- scripts/common.sh@353 -- # local d=2 00:06:34.715 16:19:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.715 16:19:26 thread -- scripts/common.sh@355 -- # echo 2 00:06:34.715 16:19:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.715 16:19:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.715 16:19:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.715 16:19:26 thread -- scripts/common.sh@368 -- # return 0 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.715 --rc genhtml_branch_coverage=1 00:06:34.715 --rc genhtml_function_coverage=1 00:06:34.715 --rc genhtml_legend=1 00:06:34.715 --rc geninfo_all_blocks=1 00:06:34.715 --rc geninfo_unexecuted_blocks=1 00:06:34.715 00:06:34.715 ' 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.715 --rc genhtml_branch_coverage=1 00:06:34.715 --rc genhtml_function_coverage=1 00:06:34.715 --rc genhtml_legend=1 00:06:34.715 --rc geninfo_all_blocks=1 00:06:34.715 --rc geninfo_unexecuted_blocks=1 00:06:34.715 00:06:34.715 ' 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.715 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.715 --rc genhtml_branch_coverage=1 00:06:34.715 --rc genhtml_function_coverage=1 00:06:34.715 --rc genhtml_legend=1 00:06:34.715 --rc geninfo_all_blocks=1 00:06:34.715 --rc geninfo_unexecuted_blocks=1 00:06:34.715 00:06:34.715 ' 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.715 --rc genhtml_branch_coverage=1 00:06:34.715 --rc genhtml_function_coverage=1 00:06:34.715 --rc genhtml_legend=1 00:06:34.715 --rc geninfo_all_blocks=1 00:06:34.715 --rc geninfo_unexecuted_blocks=1 00:06:34.715 00:06:34.715 ' 00:06:34.715 16:19:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.715 16:19:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.974 ************************************ 00:06:34.974 START TEST thread_poller_perf 00:06:34.974 ************************************ 00:06:34.974 16:19:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:34.974 [2024-11-28 16:19:26.536355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:34.974 [2024-11-28 16:19:26.536466] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71340 ] 00:06:34.974 [2024-11-28 16:19:26.701995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.234 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:35.234 [2024-11-28 16:19:26.746934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.173 [2024-11-28T16:19:27.944Z] ====================================== 00:06:36.173 [2024-11-28T16:19:27.944Z] busy:2298134784 (cyc) 00:06:36.173 [2024-11-28T16:19:27.944Z] total_run_count: 431000 00:06:36.173 [2024-11-28T16:19:27.944Z] tsc_hz: 2290000000 (cyc) 00:06:36.173 [2024-11-28T16:19:27.944Z] ====================================== 00:06:36.173 [2024-11-28T16:19:27.944Z] poller_cost: 5332 (cyc), 2328 (nsec) 00:06:36.173 00:06:36.173 real 0m1.357s 00:06:36.173 user 0m1.153s 00:06:36.173 sys 0m0.099s 00:06:36.173 16:19:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.173 16:19:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.173 ************************************ 00:06:36.173 END TEST thread_poller_perf 00:06:36.173 ************************************ 00:06:36.173 16:19:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.173 16:19:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:36.173 16:19:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.173 16:19:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.173 ************************************ 00:06:36.173 START TEST thread_poller_perf 00:06:36.173 
************************************ 00:06:36.173 16:19:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.432 [2024-11-28 16:19:27.961512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:36.432 [2024-11-28 16:19:27.961657] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71371 ] 00:06:36.432 [2024-11-28 16:19:28.119672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.432 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:36.432 [2024-11-28 16:19:28.165244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.834 [2024-11-28T16:19:29.605Z] ====================================== 00:06:37.834 [2024-11-28T16:19:29.605Z] busy:2293559422 (cyc) 00:06:37.834 [2024-11-28T16:19:29.605Z] total_run_count: 5655000 00:06:37.834 [2024-11-28T16:19:29.605Z] tsc_hz: 2290000000 (cyc) 00:06:37.834 [2024-11-28T16:19:29.605Z] ====================================== 00:06:37.834 [2024-11-28T16:19:29.605Z] poller_cost: 405 (cyc), 176 (nsec) 00:06:37.834 00:06:37.834 real 0m1.340s 00:06:37.834 user 0m1.141s 00:06:37.834 sys 0m0.093s 00:06:37.834 16:19:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.834 16:19:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.834 ************************************ 00:06:37.834 END TEST thread_poller_perf 00:06:37.834 ************************************ 00:06:37.834 16:19:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:37.834 00:06:37.834 real 0m3.061s 00:06:37.834 user 0m2.458s 00:06:37.834 sys 0m0.406s 00:06:37.834 16:19:29 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.834 16:19:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.834 ************************************ 00:06:37.834 END TEST thread 00:06:37.834 ************************************ 00:06:37.834 16:19:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:37.834 16:19:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:37.834 16:19:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.834 16:19:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.834 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:06:37.834 ************************************ 00:06:37.834 START TEST app_cmdline 00:06:37.834 ************************************ 00:06:37.834 16:19:29 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:37.834 * Looking for test storage... 00:06:37.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:37.834 16:19:29 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.834 16:19:29 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.834 16:19:29 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.834 16:19:29 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.834 16:19:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.094 16:19:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc 
genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 16:19:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.094 16:19:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71460 00:06:38.094 16:19:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.094 16:19:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71460 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71460 ']' 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:38.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.094 16:19:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.094 [2024-11-28 16:19:29.704037] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:38.094 [2024-11-28 16:19:29.704183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71460 ] 00:06:38.353 [2024-11-28 16:19:29.863912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.353 [2024-11-28 16:19:29.907422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.950 16:19:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.950 16:19:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:38.950 { 00:06:38.950 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:38.950 "fields": { 00:06:38.950 "major": 24, 00:06:38.950 "minor": 9, 00:06:38.950 "patch": 1, 00:06:38.950 "suffix": "-pre", 00:06:38.950 "commit": "b18e1bd62" 00:06:38.950 } 00:06:38.950 } 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:38.950 16:19:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.950 16:19:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.950 16:19:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:38.950 16:19:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.209 16:19:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.209 16:19:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.209 16:19:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.209 request: 00:06:39.209 { 00:06:39.209 "method": "env_dpdk_get_mem_stats", 00:06:39.209 "req_id": 1 00:06:39.209 } 00:06:39.209 Got JSON-RPC error response 00:06:39.209 response: 00:06:39.209 { 00:06:39.209 "code": -32601, 00:06:39.209 "message": "Method not found" 00:06:39.209 } 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.209 16:19:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71460 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71460 ']' 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71460 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.209 16:19:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71460 00:06:39.468 killing process with pid 71460 00:06:39.468 16:19:30 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.468 16:19:30 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.468 16:19:30 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71460' 00:06:39.468 16:19:30 app_cmdline -- common/autotest_common.sh@969 -- # kill 71460 00:06:39.468 16:19:30 app_cmdline -- common/autotest_common.sh@974 -- # wait 71460 00:06:39.728 00:06:39.728 real 0m1.969s 00:06:39.728 user 0m2.161s 00:06:39.728 sys 0m0.564s 00:06:39.728 16:19:31 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.728 16:19:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 ************************************ 00:06:39.728 END TEST app_cmdline 00:06:39.728 ************************************ 00:06:39.728 16:19:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.728 16:19:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.728 16:19:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.728 16:19:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 ************************************ 00:06:39.728 START TEST version 00:06:39.728 ************************************ 00:06:39.728 16:19:31 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.989 * Looking for test storage... 00:06:39.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.989 16:19:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.989 16:19:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.989 16:19:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.989 16:19:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.989 16:19:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.989 16:19:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.989 16:19:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.989 16:19:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.989 16:19:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.989 16:19:31 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:39.989 16:19:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.989 16:19:31 version -- scripts/common.sh@344 -- # case "$op" in 00:06:39.989 16:19:31 version -- scripts/common.sh@345 -- # : 1 00:06:39.989 16:19:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.989 16:19:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.989 16:19:31 version -- scripts/common.sh@365 -- # decimal 1 00:06:39.989 16:19:31 version -- scripts/common.sh@353 -- # local d=1 00:06:39.989 16:19:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.989 16:19:31 version -- scripts/common.sh@355 -- # echo 1 00:06:39.989 16:19:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.989 16:19:31 version -- scripts/common.sh@366 -- # decimal 2 00:06:39.989 16:19:31 version -- scripts/common.sh@353 -- # local d=2 00:06:39.989 16:19:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.989 16:19:31 version -- scripts/common.sh@355 -- # echo 2 00:06:39.989 16:19:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.989 16:19:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.989 16:19:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.989 16:19:31 version -- scripts/common.sh@368 -- # return 0 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.989 --rc genhtml_branch_coverage=1 00:06:39.989 --rc genhtml_function_coverage=1 00:06:39.989 --rc genhtml_legend=1 00:06:39.989 --rc geninfo_all_blocks=1 00:06:39.989 --rc geninfo_unexecuted_blocks=1 00:06:39.989 00:06:39.989 ' 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.989 --rc genhtml_branch_coverage=1 00:06:39.989 --rc genhtml_function_coverage=1 00:06:39.989 --rc genhtml_legend=1 00:06:39.989 --rc geninfo_all_blocks=1 00:06:39.989 --rc geninfo_unexecuted_blocks=1 00:06:39.989 00:06:39.989 ' 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.989 --rc genhtml_branch_coverage=1 00:06:39.989 --rc genhtml_function_coverage=1 00:06:39.989 --rc genhtml_legend=1 00:06:39.989 --rc geninfo_all_blocks=1 00:06:39.989 --rc geninfo_unexecuted_blocks=1 00:06:39.989 00:06:39.989 ' 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.989 --rc genhtml_branch_coverage=1 00:06:39.989 --rc genhtml_function_coverage=1 00:06:39.989 --rc genhtml_legend=1 00:06:39.989 --rc geninfo_all_blocks=1 00:06:39.989 --rc geninfo_unexecuted_blocks=1 00:06:39.989 00:06:39.989 ' 00:06:39.989 16:19:31 version -- app/version.sh@17 -- # get_header_version major 00:06:39.989 16:19:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # cut -f2 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.989 16:19:31 version -- app/version.sh@17 -- # major=24 00:06:39.989 16:19:31 version -- app/version.sh@18 -- # get_header_version minor 00:06:39.989 16:19:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # cut -f2 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.989 16:19:31 version -- app/version.sh@18 -- # minor=9 00:06:39.989 16:19:31 
version -- app/version.sh@19 -- # get_header_version patch 00:06:39.989 16:19:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # cut -f2 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.989 16:19:31 version -- app/version.sh@19 -- # patch=1 00:06:39.989 16:19:31 version -- app/version.sh@20 -- # get_header_version suffix 00:06:39.989 16:19:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # cut -f2 00:06:39.989 16:19:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.989 16:19:31 version -- app/version.sh@20 -- # suffix=-pre 00:06:39.989 16:19:31 version -- app/version.sh@22 -- # version=24.9 00:06:39.989 16:19:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:39.989 16:19:31 version -- app/version.sh@25 -- # version=24.9.1 00:06:39.989 16:19:31 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:39.989 16:19:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:39.989 16:19:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:39.989 16:19:31 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:39.989 16:19:31 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:39.989 00:06:39.989 real 0m0.311s 00:06:39.989 user 0m0.190s 00:06:39.989 sys 0m0.182s 00:06:39.989 16:19:31 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.989 16:19:31 version -- common/autotest_common.sh@10 -- # set +x 00:06:39.989 ************************************ 00:06:39.989 END 
TEST version 00:06:39.989 ************************************ 00:06:40.249 16:19:31 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:40.249 16:19:31 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:40.249 16:19:31 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.249 16:19:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.249 16:19:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.249 16:19:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.249 ************************************ 00:06:40.249 START TEST bdev_raid 00:06:40.249 ************************************ 00:06:40.249 16:19:31 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.249 * Looking for test storage... 00:06:40.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.249 16:19:31 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:40.249 16:19:31 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:40.249 16:19:31 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:40.249 16:19:31 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@341 -- # 
ver2_l=1 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.249 16:19:31 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.249 16:19:32 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:40.249 16:19:32 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.249 16:19:32 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:40.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.249 --rc genhtml_branch_coverage=1 00:06:40.249 --rc genhtml_function_coverage=1 00:06:40.249 --rc genhtml_legend=1 00:06:40.249 --rc geninfo_all_blocks=1 00:06:40.249 --rc geninfo_unexecuted_blocks=1 00:06:40.249 00:06:40.249 ' 00:06:40.249 16:19:32 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:40.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.249 --rc genhtml_branch_coverage=1 00:06:40.249 --rc genhtml_function_coverage=1 00:06:40.249 --rc genhtml_legend=1 00:06:40.249 --rc geninfo_all_blocks=1 00:06:40.249 --rc geninfo_unexecuted_blocks=1 00:06:40.249 00:06:40.249 ' 00:06:40.249 16:19:32 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:40.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.249 --rc genhtml_branch_coverage=1 00:06:40.249 --rc genhtml_function_coverage=1 00:06:40.249 --rc genhtml_legend=1 00:06:40.249 --rc geninfo_all_blocks=1 00:06:40.249 --rc geninfo_unexecuted_blocks=1 00:06:40.249 00:06:40.249 ' 00:06:40.249 16:19:32 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:40.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.249 --rc genhtml_branch_coverage=1 00:06:40.249 --rc genhtml_function_coverage=1 00:06:40.249 --rc genhtml_legend=1 00:06:40.249 --rc geninfo_all_blocks=1 00:06:40.249 --rc geninfo_unexecuted_blocks=1 00:06:40.249 00:06:40.249 ' 00:06:40.249 16:19:32 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.249 16:19:32 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.249 16:19:32 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:40.509 16:19:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:40.509 16:19:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:40.509 16:19:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:40.509 16:19:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:40.509 16:19:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.509 16:19:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.509 16:19:32 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.509 ************************************ 00:06:40.509 START TEST raid1_resize_data_offset_test 00:06:40.509 ************************************ 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71620 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71620' 00:06:40.509 Process raid pid: 71620 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71620 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71620 ']' 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.509 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.509 [2024-11-28 16:19:32.122417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:40.510 [2024-11-28 16:19:32.122569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.769 [2024-11-28 16:19:32.284855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.769 [2024-11-28 16:19:32.328414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.769 [2024-11-28 16:19:32.369632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.769 [2024-11-28 16:19:32.369671] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.338 malloc0 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:41.338 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.339 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.339 malloc1 00:06:41.339 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.339 16:19:32 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:41.339 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.339 16:19:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.339 null0 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.339 [2024-11-28 16:19:33.012573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:41.339 [2024-11-28 16:19:33.014374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:41.339 [2024-11-28 16:19:33.014433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:41.339 [2024-11-28 16:19:33.014552] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:41.339 [2024-11-28 16:19:33.014567] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:41.339 [2024-11-28 16:19:33.014827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:41.339 [2024-11-28 16:19:33.014980] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:41.339 [2024-11-28 16:19:33.015000] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:41.339 [2024-11-28 16:19:33.015124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.339 [2024-11-28 16:19:33.072438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.339 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.598 malloc2 00:06:41.598 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.598 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:41.598 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.599 [2024-11-28 16:19:33.200758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:41.599 [2024-11-28 16:19:33.205410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.599 [2024-11-28 16:19:33.207703] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71620 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71620 ']' 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71620 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71620 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.599 killing process with pid 71620 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71620' 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71620 00:06:41.599 [2024-11-28 16:19:33.297355] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.599 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71620 00:06:41.599 [2024-11-28 16:19:33.298930] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:41.599 [2024-11-28 16:19:33.298995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.599 [2024-11-28 16:19:33.299011] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:41.599 [2024-11-28 16:19:33.304310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.599 [2024-11-28 16:19:33.304595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.599 [2024-11-28 16:19:33.304625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:41.858 [2024-11-28 16:19:33.511209] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.118 16:19:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:42.118 00:06:42.118 real 0m1.706s 00:06:42.118 user 0m1.688s 00:06:42.118 sys 0m0.458s 00:06:42.118 16:19:33 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.118 16:19:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.118 ************************************ 00:06:42.118 END TEST raid1_resize_data_offset_test 00:06:42.118 ************************************ 00:06:42.118 16:19:33 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:42.118 16:19:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:42.118 16:19:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.118 16:19:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.118 ************************************ 00:06:42.118 START TEST raid0_resize_superblock_test 00:06:42.118 ************************************ 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71678 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.118 Process raid pid: 71678 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71678' 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71678 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71678 ']' 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.118 16:19:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.377 [2024-11-28 16:19:33.907111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:42.377 [2024-11-28 16:19:33.907241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.377 [2024-11-28 16:19:34.073988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.377 [2024-11-28 16:19:34.117196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.637 [2024-11-28 16:19:34.158259] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.637 [2024-11-28 16:19:34.158315] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:43.206 malloc0 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 [2024-11-28 16:19:34.835875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.206 [2024-11-28 16:19:34.835941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.206 [2024-11-28 16:19:34.835971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:43.206 [2024-11-28 16:19:34.835982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.206 [2024-11-28 16:19:34.838044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.206 [2024-11-28 16:19:34.838087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.206 pt0 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 3c4414f3-dc63-43ec-96d2-4099d02c2811 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 44df4cfc-7020-4115-85b1-fc84a75c19c0 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 23fbd625-a2a5-4adb-83fc-86d4b2315093 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 [2024-11-28 16:19:34.970584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44df4cfc-7020-4115-85b1-fc84a75c19c0 is claimed 00:06:43.206 [2024-11-28 16:19:34.970684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 23fbd625-a2a5-4adb-83fc-86d4b2315093 is claimed 00:06:43.206 [2024-11-28 16:19:34.970798] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:43.206 [2024-11-28 16:19:34.970813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:43.206 [2024-11-28 16:19:34.971064] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:43.206 [2024-11-28 16:19:34.971221] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:43.206 [2024-11-28 16:19:34.971239] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:43.206 [2024-11-28 16:19:34.971366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.206 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.466 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:43.466 16:19:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:43.466 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.466 16:19:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:43.466 16:19:35 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 [2024-11-28 16:19:35.054657] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 [2024-11-28 16:19:35.098501] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.466 [2024-11-28 16:19:35.098531] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '44df4cfc-7020-4115-85b1-fc84a75c19c0' was resized: old size 131072, new size 204800 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 [2024-11-28 16:19:35.110397] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.466 [2024-11-28 16:19:35.110424] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '23fbd625-a2a5-4adb-83fc-86d4b2315093' was resized: old size 131072, new size 204800 00:06:43.466 [2024-11-28 16:19:35.110448] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:43.466 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.467 16:19:35 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 [2024-11-28 16:19:35.198351] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 [2024-11-28 16:19:35.226138] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:43.467 [2024-11-28 16:19:35.226203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:43.467 [2024-11-28 16:19:35.226213] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.467 [2024-11-28 16:19:35.226235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:43.467 [2024-11-28 16:19:35.226329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.467 [2024-11-28 16:19:35.226362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.467 [2024-11-28 16:19:35.226372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.467 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.726 [2024-11-28 16:19:35.238058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.726 [2024-11-28 16:19:35.238114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.726 [2024-11-28 16:19:35.238133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:43.726 [2024-11-28 16:19:35.238144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.726 [2024-11-28 16:19:35.240153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.726 [2024-11-28 16:19:35.240191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:43.726 [2024-11-28 16:19:35.241519] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 44df4cfc-7020-4115-85b1-fc84a75c19c0 00:06:43.726 [2024-11-28 16:19:35.241582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44df4cfc-7020-4115-85b1-fc84a75c19c0 is claimed 00:06:43.726 [2024-11-28 16:19:35.241660] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 23fbd625-a2a5-4adb-83fc-86d4b2315093 00:06:43.726 [2024-11-28 16:19:35.241679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 23fbd625-a2a5-4adb-83fc-86d4b2315093 is claimed 00:06:43.726 [2024-11-28 16:19:35.241752] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 23fbd625-a2a5-4adb-83fc-86d4b2315093 (2) smaller than existing raid bdev Raid (3) 00:06:43.726 [2024-11-28 16:19:35.241779] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 44df4cfc-7020-4115-85b1-fc84a75c19c0: File exists 00:06:43.726 [2024-11-28 16:19:35.241815] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:43.726 [2024-11-28 16:19:35.241823] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:43.726 [2024-11-28 16:19:35.242046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:43.726 [2024-11-28 16:19:35.242180] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:43.726 [2024-11-28 16:19:35.242194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:43.726 [2024-11-28 16:19:35.242328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.726 pt0 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.726 [2024-11-28 16:19:35.266464] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71678 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71678 ']' 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71678 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71678 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.726 killing process with pid 71678 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71678' 00:06:43.726 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71678 00:06:43.726 [2024-11-28 16:19:35.350748] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.727 [2024-11-28 16:19:35.350805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.727 [2024-11-28 16:19:35.350853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.727 [2024-11-28 16:19:35.350862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:43.727 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71678 00:06:43.984 [2024-11-28 16:19:35.507308] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.984 16:19:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:43.984 00:06:43.984 real 0m1.932s 00:06:43.984 user 0m2.141s 00:06:43.984 sys 0m0.504s 00:06:43.984 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.984 16:19:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.984 
************************************ 00:06:43.984 END TEST raid0_resize_superblock_test 00:06:43.984 ************************************ 00:06:44.243 16:19:35 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:44.243 16:19:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:44.243 16:19:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.243 16:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:44.243 ************************************ 00:06:44.243 START TEST raid1_resize_superblock_test 00:06:44.243 ************************************ 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71749 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71749' 00:06:44.244 Process raid pid: 71749 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71749 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71749 ']' 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.244 16:19:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.244 [2024-11-28 16:19:35.905184] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:44.244 [2024-11-28 16:19:35.905324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.504 [2024-11-28 16:19:36.070511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.504 [2024-11-28 16:19:36.115108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.504 [2024-11-28 16:19:36.157309] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.504 [2024-11-28 16:19:36.157358] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.073 malloc0 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.073 16:19:36 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.073 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 [2024-11-28 16:19:36.846280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:45.332 [2024-11-28 16:19:36.846349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.332 [2024-11-28 16:19:36.846378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:45.332 [2024-11-28 16:19:36.846396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.332 [2024-11-28 16:19:36.848520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.332 [2024-11-28 16:19:36.848564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:45.332 pt0 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 f57d7674-184a-4b2b-b3c4-3971b66b7c62 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:36 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 4a6a21f6-7c5a-4200-91f3-4191b69938f3 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 b7b3d7c2-54e2-46ea-a11d-27bac27cba66 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 [2024-11-28 16:19:36.981274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a6a21f6-7c5a-4200-91f3-4191b69938f3 is claimed 00:06:45.332 [2024-11-28 16:19:36.981354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b7b3d7c2-54e2-46ea-a11d-27bac27cba66 is claimed 00:06:45.332 [2024-11-28 16:19:36.981461] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:45.332 [2024-11-28 16:19:36.981475] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:45.332 [2024-11-28 16:19:36.981741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:45.332 [2024-11-28 16:19:36.981914] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:45.332 [2024-11-28 16:19:36.981938] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:45.332 [2024-11-28 16:19:36.982064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:45.332 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.332 [2024-11-28 16:19:37.093336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 [2024-11-28 16:19:37.141154] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:45.593 [2024-11-28 16:19:37.141182] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4a6a21f6-7c5a-4200-91f3-4191b69938f3' was resized: old size 131072, new size 204800 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:45.593 16:19:37 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 [2024-11-28 16:19:37.153050] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:45.593 [2024-11-28 16:19:37.153076] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b7b3d7c2-54e2-46ea-a11d-27bac27cba66' was resized: old size 131072, new size 204800 00:06:45.593 [2024-11-28 16:19:37.153099] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 [2024-11-28 16:19:37.245078] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 [2024-11-28 16:19:37.288806] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:45.593 [2024-11-28 16:19:37.288887] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:45.593 [2024-11-28 16:19:37.288916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:45.593 [2024-11-28 16:19:37.289075] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:45.593 [2024-11-28 16:19:37.289255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.593 [2024-11-28 16:19:37.289322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.593 [2024-11-28 16:19:37.289346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.593 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.593 [2024-11-28 16:19:37.300697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:45.593 [2024-11-28 16:19:37.300760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.593 [2024-11-28 16:19:37.300782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:45.593 [2024-11-28 16:19:37.300795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.593 [2024-11-28 16:19:37.302851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.593 [2024-11-28 16:19:37.302899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:45.593 [2024-11-28 16:19:37.304281] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
4a6a21f6-7c5a-4200-91f3-4191b69938f3 00:06:45.593 [2024-11-28 16:19:37.304344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a6a21f6-7c5a-4200-91f3-4191b69938f3 is claimed 00:06:45.593 [2024-11-28 16:19:37.304424] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b7b3d7c2-54e2-46ea-a11d-27bac27cba66 00:06:45.593 [2024-11-28 16:19:37.304445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b7b3d7c2-54e2-46ea-a11d-27bac27cba66 is claimed 00:06:45.593 [2024-11-28 16:19:37.304525] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b7b3d7c2-54e2-46ea-a11d-27bac27cba66 (2) smaller than existing raid bdev Raid (3) 00:06:45.593 [2024-11-28 16:19:37.304552] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4a6a21f6-7c5a-4200-91f3-4191b69938f3: File exists 00:06:45.593 [2024-11-28 16:19:37.304589] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:45.593 [2024-11-28 16:19:37.304598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:45.593 [2024-11-28 16:19:37.304815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:45.593 [2024-11-28 16:19:37.304958] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:45.593 [2024-11-28 16:19:37.304973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:45.594 [2024-11-28 16:19:37.305116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.594 pt0 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.594 [2024-11-28 16:19:37.329296] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.594 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71749 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71749 ']' 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71749 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.853 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71749 00:06:45.854 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.854 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.854 killing process with pid 71749 00:06:45.854 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71749' 00:06:45.854 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71749 00:06:45.854 [2024-11-28 16:19:37.397681] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.854 [2024-11-28 16:19:37.397757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.854 [2024-11-28 16:19:37.397809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.854 [2024-11-28 16:19:37.397819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:45.854 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71749 00:06:45.854 [2024-11-28 16:19:37.555205] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.114 16:19:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:46.114 00:06:46.114 real 0m1.982s 00:06:46.114 user 0m2.238s 00:06:46.114 sys 0m0.493s 00:06:46.114 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.114 16:19:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 ************************************ 00:06:46.114 END TEST raid1_resize_superblock_test 00:06:46.114 
************************************ 00:06:46.114 16:19:37 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:46.114 16:19:37 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:46.114 16:19:37 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:46.114 16:19:37 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:46.114 16:19:37 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:46.114 16:19:37 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:46.114 16:19:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:46.114 16:19:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.114 16:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.374 ************************************ 00:06:46.374 START TEST raid_function_test_raid0 00:06:46.374 ************************************ 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71825 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71825' 00:06:46.374 Process raid pid: 71825 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71825 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 
71825 ']' 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.374 16:19:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.374 [2024-11-28 16:19:37.977871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:46.374 [2024-11-28 16:19:37.978012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.374 [2024-11-28 16:19:38.139511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.635 [2024-11-28 16:19:38.184294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.635 [2024-11-28 16:19:38.226259] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.635 [2024-11-28 16:19:38.226307] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:47.204 16:19:38 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.204 Base_1 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.204 Base_2 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.204 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.204 [2024-11-28 16:19:38.850547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:47.204 [2024-11-28 16:19:38.853680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:47.204 [2024-11-28 16:19:38.853784] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:47.205 [2024-11-28 16:19:38.853803] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:47.205 [2024-11-28 16:19:38.854202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:47.205 [2024-11-28 16:19:38.854378] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:47.205 [2024-11-28 16:19:38.854402] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:47.205 [2024-11-28 16:19:38.854653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:47.205 16:19:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:47.465 [2024-11-28 16:19:39.074240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:47.465 /dev/nbd0 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.465 1+0 records in 00:06:47.465 1+0 records out 00:06:47.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259898 s, 15.8 MB/s 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@886 -- # size=4096 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.465 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.725 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.725 { 00:06:47.725 "nbd_device": "/dev/nbd0", 00:06:47.725 "bdev_name": "raid" 00:06:47.725 } 00:06:47.725 ]' 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.726 { 00:06:47.726 "nbd_device": "/dev/nbd0", 00:06:47.726 "bdev_name": "raid" 00:06:47.726 } 00:06:47.726 ]' 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:47.726 
16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:47.726 4096+0 records in 00:06:47.726 4096+0 records out 00:06:47.726 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0343023 s, 61.1 MB/s 00:06:47.726 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:47.985 4096+0 records in 00:06:47.985 4096+0 records out 00:06:47.985 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.183081 s, 11.5 MB/s 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:47.985 128+0 records in 00:06:47.985 128+0 records out 00:06:47.985 65536 bytes (66 kB, 64 KiB) copied, 0.00123107 s, 53.2 MB/s 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:47.985 2035+0 records in 00:06:47.985 2035+0 records out 00:06:47.985 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138128 s, 75.4 MB/s 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:47.985 456+0 records in 00:06:47.985 456+0 records out 00:06:47.985 233472 bytes (233 kB, 228 KiB) copied, 0.00427284 s, 54.6 MB/s 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:47.985 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.985 16:19:39 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.244 [2024-11-28 16:19:39.964760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.244 16:19:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71825 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71825 ']' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # kill -0 71825 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.504 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71825 00:06:48.763 killing process with pid 71825 00:06:48.763 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.763 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.763 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71825' 00:06:48.763 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71825 00:06:48.763 [2024-11-28 16:19:40.275302] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.763 [2024-11-28 16:19:40.275427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.763 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71825 00:06:48.763 [2024-11-28 16:19:40.275477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.763 [2024-11-28 16:19:40.275490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:48.763 [2024-11-28 16:19:40.298521] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.022 16:19:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.022 00:06:49.022 real 0m2.645s 00:06:49.022 user 0m3.249s 00:06:49.022 sys 0m0.905s 00:06:49.022 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.022 ************************************ 
00:06:49.022 END TEST raid_function_test_raid0 00:06:49.022 ************************************ 00:06:49.022 16:19:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.022 16:19:40 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:49.022 16:19:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:49.022 16:19:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.022 16:19:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.022 ************************************ 00:06:49.022 START TEST raid_function_test_concat 00:06:49.022 ************************************ 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71940 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71940' 00:06:49.022 Process raid pid: 71940 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71940 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71940 ']' 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.022 16:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.022 [2024-11-28 16:19:40.689234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:49.022 [2024-11-28 16:19:40.689453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.281 [2024-11-28 16:19:40.849933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.281 [2024-11-28 16:19:40.895025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.281 [2024-11-28 16:19:40.937587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.281 [2024-11-28 16:19:40.937675] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.858 Base_1 
00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.858 Base_2 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.858 [2024-11-28 16:19:41.574110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.858 [2024-11-28 16:19:41.577744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.858 [2024-11-28 16:19:41.577896] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:49.858 [2024-11-28 16:19:41.577923] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.858 [2024-11-28 16:19:41.578442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.858 [2024-11-28 16:19:41.578711] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:49.858 [2024-11-28 16:19:41.578733] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:49.858 [2024-11-28 16:19:41.579079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.858 16:19:41 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.858 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:50.145 [2024-11-28 16:19:41.809615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.145 /dev/nbd0 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.145 1+0 records in 00:06:50.145 1+0 records out 00:06:50.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351365 s, 11.7 MB/s 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.145 16:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.405 { 00:06:50.405 "nbd_device": "/dev/nbd0", 00:06:50.405 "bdev_name": "raid" 00:06:50.405 } 00:06:50.405 ]' 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.405 { 00:06:50.405 "nbd_device": "/dev/nbd0", 00:06:50.405 "bdev_name": "raid" 00:06:50.405 } 00:06:50.405 ]' 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:50.405 16:19:42 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:50.405 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:50.666 4096+0 records in 00:06:50.666 4096+0 records out 00:06:50.666 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0313484 s, 66.9 MB/s 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:50.666 4096+0 records in 00:06:50.666 4096+0 records out 00:06:50.666 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.177941 s, 11.8 MB/s 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:50.666 128+0 records in 00:06:50.666 128+0 records out 00:06:50.666 65536 bytes (66 kB, 64 KiB) copied, 0.00128867 s, 50.9 MB/s 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:50.666 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:50.926 2035+0 records in 00:06:50.926 2035+0 records out 00:06:50.926 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0153665 s, 67.8 MB/s 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:50.926 456+0 records in 00:06:50.926 456+0 records out 00:06:50.926 233472 bytes (233 kB, 228 KiB) copied, 0.00238243 s, 98.0 MB/s 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.926 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.186 [2024-11-28 16:19:42.716185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.186 16:19:42 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.186 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71940 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71940 ']' 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- 
# kill -0 71940 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:51.445 16:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.446 16:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71940 00:06:51.446 killing process with pid 71940 00:06:51.446 16:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.446 16:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.446 16:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71940' 00:06:51.446 16:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71940 00:06:51.446 [2024-11-28 16:19:43.008743] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.446 [2024-11-28 16:19:43.008867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:51.446 16:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71940 00:06:51.446 [2024-11-28 16:19:43.008943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:51.446 [2024-11-28 16:19:43.008960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:51.446 [2024-11-28 16:19:43.032177] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.706 ************************************ 00:06:51.706 END TEST raid_function_test_concat 00:06:51.706 ************************************ 00:06:51.706 16:19:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:51.706 00:06:51.706 real 0m2.662s 00:06:51.706 user 0m3.240s 00:06:51.706 sys 0m0.937s 00:06:51.706 16:19:43 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.706 16:19:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.706 16:19:43 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:51.706 16:19:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.706 16:19:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.706 16:19:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.706 ************************************ 00:06:51.706 START TEST raid0_resize_test 00:06:51.706 ************************************ 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72050 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72050' 00:06:51.706 Process raid pid: 72050 
00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72050 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72050 ']' 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.706 16:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.706 [2024-11-28 16:19:43.423594] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:51.706 [2024-11-28 16:19:43.423861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.966 [2024-11-28 16:19:43.577313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.966 [2024-11-28 16:19:43.622686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.966 [2024-11-28 16:19:43.664782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.966 [2024-11-28 16:19:43.664815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.536 Base_1 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.536 Base_2 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.536 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.536 [2024-11-28 16:19:44.277708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:52.536 [2024-11-28 16:19:44.279430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:52.536 [2024-11-28 16:19:44.279484] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:52.536 [2024-11-28 16:19:44.279500] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.536 [2024-11-28 16:19:44.279752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:52.536 [2024-11-28 16:19:44.279896] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:52.537 [2024-11-28 16:19:44.279911] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:52.537 [2024-11-28 16:19:44.280008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.537 [2024-11-28 16:19:44.289643] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.537 [2024-11-28 16:19:44.289709] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:52.537 true 
00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.537 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:52.537 [2024-11-28 16:19:44.301817] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.797 [2024-11-28 16:19:44.349554] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.797 [2024-11-28 16:19:44.349621] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:52.797 [2024-11-28 16:19:44.349651] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:52.797 true 
00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:52.797 [2024-11-28 16:19:44.361691] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72050 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72050 ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72050 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72050 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.797 16:19:44 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72050' 00:06:52.797 killing process with pid 72050 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72050 00:06:52.797 [2024-11-28 16:19:44.446144] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.797 [2024-11-28 16:19:44.446272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.797 [2024-11-28 16:19:44.446341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.797 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72050 00:06:52.797 [2024-11-28 16:19:44.446386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:52.797 [2024-11-28 16:19:44.447873] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.057 16:19:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:53.057 00:06:53.057 real 0m1.348s 00:06:53.057 user 0m1.502s 00:06:53.057 sys 0m0.300s 00:06:53.057 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.057 16:19:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.057 ************************************ 00:06:53.057 END TEST raid0_resize_test 00:06:53.057 ************************************ 00:06:53.057 16:19:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:53.057 16:19:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.057 16:19:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.057 16:19:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.057 
************************************ 00:06:53.057 START TEST raid1_resize_test 00:06:53.057 ************************************ 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:53.057 Process raid pid: 72101 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72101 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72101' 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72101 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72101 ']' 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.057 16:19:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.317 [2024-11-28 16:19:44.841310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.317 [2024-11-28 16:19:44.841547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.317 [2024-11-28 16:19:45.003038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.317 [2024-11-28 16:19:45.047128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 [2024-11-28 16:19:45.088982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.576 [2024-11-28 16:19:45.089087] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 Base_1 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:54.146 16:19:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 Base_2 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 [2024-11-28 16:19:45.693996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:54.146 [2024-11-28 16:19:45.695667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:54.146 [2024-11-28 16:19:45.695746] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:54.146 [2024-11-28 16:19:45.695757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:54.146 [2024-11-28 16:19:45.696026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:54.146 [2024-11-28 16:19:45.696138] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:54.146 [2024-11-28 16:19:45.696147] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:54.146 [2024-11-28 16:19:45.696255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:54.146 16:19:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 [2024-11-28 16:19:45.705974] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.146 [2024-11-28 16:19:45.706039] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:54.146 true 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 [2024-11-28 16:19:45.722129] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:54.146 [2024-11-28 16:19:45.769912] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.146 [2024-11-28 16:19:45.769934] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:54.146 [2024-11-28 16:19:45.769959] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:54.146 true 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 [2024-11-28 16:19:45.786043] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72101 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72101 ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72101 00:06:54.146 
16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72101 00:06:54.146 killing process with pid 72101 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72101' 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72101 00:06:54.146 [2024-11-28 16:19:45.856199] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.146 [2024-11-28 16:19:45.856267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.146 16:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72101 00:06:54.146 [2024-11-28 16:19:45.856655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.146 [2024-11-28 16:19:45.856728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:54.147 [2024-11-28 16:19:45.857824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.406 ************************************ 00:06:54.406 END TEST raid1_resize_test 00:06:54.406 ************************************ 00:06:54.407 16:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:54.407 00:06:54.407 real 0m1.339s 00:06:54.407 user 0m1.475s 00:06:54.407 sys 0m0.321s 00:06:54.407 16:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.407 16:19:46 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.407 16:19:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:54.407 16:19:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:54.407 16:19:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:54.407 16:19:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:54.407 16:19:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.407 16:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.407 ************************************ 00:06:54.407 START TEST raid_state_function_test 00:06:54.407 ************************************ 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:54.407 16:19:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:54.407 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:54.667 Process raid pid: 72148 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72148 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72148' 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72148 00:06:54.667 16:19:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72148 ']' 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.667 16:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.667 [2024-11-28 16:19:46.262169] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:54.667 [2024-11-28 16:19:46.262323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.667 [2024-11-28 16:19:46.423704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.926 [2024-11-28 16:19:46.470675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.926 [2024-11-28 16:19:46.512863] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.926 [2024-11-28 16:19:46.513000] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.496 [2024-11-28 16:19:47.082221] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:55.496 [2024-11-28 16:19:47.082280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:55.496 [2024-11-28 16:19:47.082292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.496 [2024-11-28 16:19:47.082301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.496 
16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.496 "name": "Existed_Raid", 00:06:55.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.496 "strip_size_kb": 64, 00:06:55.496 "state": "configuring", 00:06:55.496 "raid_level": "raid0", 00:06:55.496 "superblock": false, 00:06:55.496 "num_base_bdevs": 2, 00:06:55.496 "num_base_bdevs_discovered": 0, 00:06:55.496 "num_base_bdevs_operational": 2, 00:06:55.496 "base_bdevs_list": [ 00:06:55.496 { 00:06:55.496 "name": "BaseBdev1", 00:06:55.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.496 "is_configured": false, 00:06:55.496 "data_offset": 0, 00:06:55.496 "data_size": 0 00:06:55.496 }, 00:06:55.496 { 00:06:55.496 "name": "BaseBdev2", 00:06:55.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.496 "is_configured": false, 00:06:55.496 "data_offset": 0, 00:06:55.496 "data_size": 0 00:06:55.496 } 00:06:55.496 ] 00:06:55.496 }' 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.496 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.755 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.755 16:19:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.755 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.755 [2024-11-28 16:19:47.517384] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.755 [2024-11-28 16:19:47.517513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:55.755 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.755 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.755 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.755 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.016 [2024-11-28 16:19:47.529389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.016 [2024-11-28 16:19:47.529432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.016 [2024-11-28 16:19:47.529441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.016 [2024-11-28 16:19:47.529450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.016 [2024-11-28 16:19:47.550073] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.016 BaseBdev1 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.016 [ 00:06:56.016 { 00:06:56.016 "name": "BaseBdev1", 00:06:56.016 "aliases": [ 00:06:56.016 "9401f316-dd22-4341-99d5-871d97cc27af" 00:06:56.016 ], 00:06:56.016 "product_name": "Malloc disk", 00:06:56.016 "block_size": 512, 00:06:56.016 "num_blocks": 65536, 00:06:56.016 "uuid": 
"9401f316-dd22-4341-99d5-871d97cc27af", 00:06:56.016 "assigned_rate_limits": { 00:06:56.016 "rw_ios_per_sec": 0, 00:06:56.016 "rw_mbytes_per_sec": 0, 00:06:56.016 "r_mbytes_per_sec": 0, 00:06:56.016 "w_mbytes_per_sec": 0 00:06:56.016 }, 00:06:56.016 "claimed": true, 00:06:56.016 "claim_type": "exclusive_write", 00:06:56.016 "zoned": false, 00:06:56.016 "supported_io_types": { 00:06:56.016 "read": true, 00:06:56.016 "write": true, 00:06:56.016 "unmap": true, 00:06:56.016 "flush": true, 00:06:56.016 "reset": true, 00:06:56.016 "nvme_admin": false, 00:06:56.016 "nvme_io": false, 00:06:56.016 "nvme_io_md": false, 00:06:56.016 "write_zeroes": true, 00:06:56.016 "zcopy": true, 00:06:56.016 "get_zone_info": false, 00:06:56.016 "zone_management": false, 00:06:56.016 "zone_append": false, 00:06:56.016 "compare": false, 00:06:56.016 "compare_and_write": false, 00:06:56.016 "abort": true, 00:06:56.016 "seek_hole": false, 00:06:56.016 "seek_data": false, 00:06:56.016 "copy": true, 00:06:56.016 "nvme_iov_md": false 00:06:56.016 }, 00:06:56.016 "memory_domains": [ 00:06:56.016 { 00:06:56.016 "dma_device_id": "system", 00:06:56.016 "dma_device_type": 1 00:06:56.016 }, 00:06:56.016 { 00:06:56.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.016 "dma_device_type": 2 00:06:56.016 } 00:06:56.016 ], 00:06:56.016 "driver_specific": {} 00:06:56.016 } 00:06:56.016 ] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.016 16:19:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.016 "name": "Existed_Raid", 00:06:56.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.016 "strip_size_kb": 64, 00:06:56.016 "state": "configuring", 00:06:56.016 "raid_level": "raid0", 00:06:56.016 "superblock": false, 00:06:56.016 "num_base_bdevs": 2, 00:06:56.016 "num_base_bdevs_discovered": 1, 00:06:56.016 "num_base_bdevs_operational": 2, 00:06:56.016 "base_bdevs_list": [ 00:06:56.016 { 00:06:56.016 "name": "BaseBdev1", 00:06:56.016 "uuid": "9401f316-dd22-4341-99d5-871d97cc27af", 00:06:56.016 "is_configured": true, 00:06:56.016 "data_offset": 0, 
00:06:56.016 "data_size": 65536 00:06:56.016 }, 00:06:56.016 { 00:06:56.016 "name": "BaseBdev2", 00:06:56.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.016 "is_configured": false, 00:06:56.016 "data_offset": 0, 00:06:56.016 "data_size": 0 00:06:56.016 } 00:06:56.016 ] 00:06:56.016 }' 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.016 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.276 [2024-11-28 16:19:47.973380] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.276 [2024-11-28 16:19:47.973435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.276 [2024-11-28 16:19:47.981398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.276 [2024-11-28 16:19:47.983301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.276 [2024-11-28 16:19:47.983378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.276 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.277 16:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.277 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.277 16:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.277 16:19:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.277 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.277 "name": "Existed_Raid", 00:06:56.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.277 "strip_size_kb": 64, 00:06:56.277 "state": "configuring", 00:06:56.277 "raid_level": "raid0", 00:06:56.277 "superblock": false, 00:06:56.277 "num_base_bdevs": 2, 00:06:56.277 "num_base_bdevs_discovered": 1, 00:06:56.277 "num_base_bdevs_operational": 2, 00:06:56.277 "base_bdevs_list": [ 00:06:56.277 { 00:06:56.277 "name": "BaseBdev1", 00:06:56.277 "uuid": "9401f316-dd22-4341-99d5-871d97cc27af", 00:06:56.277 "is_configured": true, 00:06:56.277 "data_offset": 0, 00:06:56.277 "data_size": 65536 00:06:56.277 }, 00:06:56.277 { 00:06:56.277 "name": "BaseBdev2", 00:06:56.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.277 "is_configured": false, 00:06:56.277 "data_offset": 0, 00:06:56.277 "data_size": 0 00:06:56.277 } 00:06:56.277 ] 00:06:56.277 }' 00:06:56.277 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.277 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.847 [2024-11-28 16:19:48.420301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:56.847 [2024-11-28 16:19:48.420444] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:56.847 [2024-11-28 16:19:48.420477] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.847 [2024-11-28 16:19:48.420847] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:56.847 [2024-11-28 16:19:48.421056] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:56.847 [2024-11-28 16:19:48.421113] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:56.847 [2024-11-28 16:19:48.421405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.847 BaseBdev2 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.847 16:19:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.847 [ 00:06:56.847 { 00:06:56.847 "name": "BaseBdev2", 00:06:56.847 "aliases": [ 00:06:56.847 "e614104e-72af-434e-b4d9-34d3d2bb1e11" 00:06:56.847 ], 00:06:56.847 "product_name": "Malloc disk", 00:06:56.847 "block_size": 512, 00:06:56.847 "num_blocks": 65536, 00:06:56.847 "uuid": "e614104e-72af-434e-b4d9-34d3d2bb1e11", 00:06:56.847 "assigned_rate_limits": { 00:06:56.847 "rw_ios_per_sec": 0, 00:06:56.847 "rw_mbytes_per_sec": 0, 00:06:56.847 "r_mbytes_per_sec": 0, 00:06:56.847 "w_mbytes_per_sec": 0 00:06:56.847 }, 00:06:56.847 "claimed": true, 00:06:56.847 "claim_type": "exclusive_write", 00:06:56.847 "zoned": false, 00:06:56.847 "supported_io_types": { 00:06:56.847 "read": true, 00:06:56.847 "write": true, 00:06:56.847 "unmap": true, 00:06:56.847 "flush": true, 00:06:56.847 "reset": true, 00:06:56.847 "nvme_admin": false, 00:06:56.847 "nvme_io": false, 00:06:56.847 "nvme_io_md": false, 00:06:56.847 "write_zeroes": true, 00:06:56.847 "zcopy": true, 00:06:56.847 "get_zone_info": false, 00:06:56.847 "zone_management": false, 00:06:56.847 "zone_append": false, 00:06:56.847 "compare": false, 00:06:56.847 "compare_and_write": false, 00:06:56.847 "abort": true, 00:06:56.847 "seek_hole": false, 00:06:56.847 "seek_data": false, 00:06:56.847 "copy": true, 00:06:56.847 "nvme_iov_md": false 00:06:56.847 }, 00:06:56.847 "memory_domains": [ 00:06:56.847 { 00:06:56.847 "dma_device_id": "system", 00:06:56.847 "dma_device_type": 1 00:06:56.847 }, 00:06:56.847 { 00:06:56.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.847 "dma_device_type": 2 00:06:56.847 } 00:06:56.847 ], 00:06:56.847 "driver_specific": {} 00:06:56.847 } 00:06:56.847 ] 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:06:56.847 16:19:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.847 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:06:56.847 "name": "Existed_Raid", 00:06:56.847 "uuid": "5682fb73-8b5a-4692-a7df-3453aa733f98", 00:06:56.847 "strip_size_kb": 64, 00:06:56.847 "state": "online", 00:06:56.847 "raid_level": "raid0", 00:06:56.847 "superblock": false, 00:06:56.847 "num_base_bdevs": 2, 00:06:56.847 "num_base_bdevs_discovered": 2, 00:06:56.847 "num_base_bdevs_operational": 2, 00:06:56.847 "base_bdevs_list": [ 00:06:56.847 { 00:06:56.847 "name": "BaseBdev1", 00:06:56.847 "uuid": "9401f316-dd22-4341-99d5-871d97cc27af", 00:06:56.847 "is_configured": true, 00:06:56.847 "data_offset": 0, 00:06:56.847 "data_size": 65536 00:06:56.847 }, 00:06:56.847 { 00:06:56.847 "name": "BaseBdev2", 00:06:56.847 "uuid": "e614104e-72af-434e-b4d9-34d3d2bb1e11", 00:06:56.848 "is_configured": true, 00:06:56.848 "data_offset": 0, 00:06:56.848 "data_size": 65536 00:06:56.848 } 00:06:56.848 ] 00:06:56.848 }' 00:06:56.848 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.848 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:57.418 [2024-11-28 16:19:48.927791] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:57.418 "name": "Existed_Raid", 00:06:57.418 "aliases": [ 00:06:57.418 "5682fb73-8b5a-4692-a7df-3453aa733f98" 00:06:57.418 ], 00:06:57.418 "product_name": "Raid Volume", 00:06:57.418 "block_size": 512, 00:06:57.418 "num_blocks": 131072, 00:06:57.418 "uuid": "5682fb73-8b5a-4692-a7df-3453aa733f98", 00:06:57.418 "assigned_rate_limits": { 00:06:57.418 "rw_ios_per_sec": 0, 00:06:57.418 "rw_mbytes_per_sec": 0, 00:06:57.418 "r_mbytes_per_sec": 0, 00:06:57.418 "w_mbytes_per_sec": 0 00:06:57.418 }, 00:06:57.418 "claimed": false, 00:06:57.418 "zoned": false, 00:06:57.418 "supported_io_types": { 00:06:57.418 "read": true, 00:06:57.418 "write": true, 00:06:57.418 "unmap": true, 00:06:57.418 "flush": true, 00:06:57.418 "reset": true, 00:06:57.418 "nvme_admin": false, 00:06:57.418 "nvme_io": false, 00:06:57.418 "nvme_io_md": false, 00:06:57.418 "write_zeroes": true, 00:06:57.418 "zcopy": false, 00:06:57.418 "get_zone_info": false, 00:06:57.418 "zone_management": false, 00:06:57.418 "zone_append": false, 00:06:57.418 "compare": false, 00:06:57.418 "compare_and_write": false, 00:06:57.418 "abort": false, 00:06:57.418 "seek_hole": false, 00:06:57.418 "seek_data": false, 00:06:57.418 "copy": false, 00:06:57.418 "nvme_iov_md": false 00:06:57.418 }, 00:06:57.418 "memory_domains": [ 00:06:57.418 { 00:06:57.418 "dma_device_id": "system", 00:06:57.418 "dma_device_type": 1 00:06:57.418 }, 00:06:57.418 { 00:06:57.418 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:57.418 "dma_device_type": 2 00:06:57.418 }, 00:06:57.418 { 00:06:57.418 "dma_device_id": "system", 00:06:57.418 "dma_device_type": 1 00:06:57.418 }, 00:06:57.418 { 00:06:57.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.418 "dma_device_type": 2 00:06:57.418 } 00:06:57.418 ], 00:06:57.418 "driver_specific": { 00:06:57.418 "raid": { 00:06:57.418 "uuid": "5682fb73-8b5a-4692-a7df-3453aa733f98", 00:06:57.418 "strip_size_kb": 64, 00:06:57.418 "state": "online", 00:06:57.418 "raid_level": "raid0", 00:06:57.418 "superblock": false, 00:06:57.418 "num_base_bdevs": 2, 00:06:57.418 "num_base_bdevs_discovered": 2, 00:06:57.418 "num_base_bdevs_operational": 2, 00:06:57.418 "base_bdevs_list": [ 00:06:57.418 { 00:06:57.418 "name": "BaseBdev1", 00:06:57.418 "uuid": "9401f316-dd22-4341-99d5-871d97cc27af", 00:06:57.418 "is_configured": true, 00:06:57.418 "data_offset": 0, 00:06:57.418 "data_size": 65536 00:06:57.418 }, 00:06:57.418 { 00:06:57.418 "name": "BaseBdev2", 00:06:57.418 "uuid": "e614104e-72af-434e-b4d9-34d3d2bb1e11", 00:06:57.418 "is_configured": true, 00:06:57.418 "data_offset": 0, 00:06:57.418 "data_size": 65536 00:06:57.418 } 00:06:57.418 ] 00:06:57.418 } 00:06:57.418 } 00:06:57.418 }' 00:06:57.418 16:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:57.418 BaseBdev2' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:57.418 [2024-11-28 16:19:49.135176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:57.418 [2024-11-28 16:19:49.135211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.418 [2024-11-28 16:19:49.135260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.418 16:19:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.418 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.678 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.678 "name": "Existed_Raid", 00:06:57.678 "uuid": "5682fb73-8b5a-4692-a7df-3453aa733f98", 00:06:57.678 "strip_size_kb": 64, 00:06:57.678 "state": "offline", 00:06:57.678 "raid_level": "raid0", 00:06:57.678 "superblock": false, 00:06:57.678 "num_base_bdevs": 2, 00:06:57.678 "num_base_bdevs_discovered": 1, 00:06:57.678 "num_base_bdevs_operational": 1, 00:06:57.678 "base_bdevs_list": [ 00:06:57.678 { 00:06:57.678 "name": null, 00:06:57.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.678 "is_configured": false, 00:06:57.678 "data_offset": 0, 00:06:57.678 "data_size": 65536 00:06:57.678 }, 00:06:57.678 { 00:06:57.678 "name": "BaseBdev2", 00:06:57.678 "uuid": "e614104e-72af-434e-b4d9-34d3d2bb1e11", 00:06:57.678 "is_configured": true, 00:06:57.678 "data_offset": 0, 00:06:57.678 "data_size": 65536 00:06:57.678 } 00:06:57.678 ] 00:06:57.678 }' 00:06:57.679 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.679 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.938 16:19:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:57.938 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.938 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.938 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.938 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.939 [2024-11-28 16:19:49.593663] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.939 [2024-11-28 16:19:49.593773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.939 16:19:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72148 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72148 ']' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72148 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72148 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72148' 00:06:57.939 killing process with pid 72148 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72148 00:06:57.939 [2024-11-28 16:19:49.679178] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:06:57.939 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72148 00:06:57.939 [2024-11-28 16:19:49.680203] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.198 16:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:58.198 00:06:58.198 real 0m3.750s 00:06:58.198 user 0m5.879s 00:06:58.198 sys 0m0.720s 00:06:58.198 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.198 16:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.198 ************************************ 00:06:58.198 END TEST raid_state_function_test 00:06:58.198 ************************************ 00:06:58.458 16:19:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:58.458 16:19:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:58.458 16:19:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.458 16:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.458 ************************************ 00:06:58.458 START TEST raid_state_function_test_sb 00:06:58.458 ************************************ 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.458 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:58.459 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:58.459 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:58.459 16:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72389 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72389' 00:06:58.459 Process raid pid: 72389 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72389 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72389 ']' 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.459 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.459 [2024-11-28 16:19:50.080501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:58.459 [2024-11-28 16:19:50.080617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.718 [2024-11-28 16:19:50.240494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.718 [2024-11-28 16:19:50.284246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.718 [2024-11-28 16:19:50.325827] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.718 [2024-11-28 16:19:50.325871] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.287 [2024-11-28 16:19:50.918733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.287 [2024-11-28 16:19:50.918792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.287 [2024-11-28 16:19:50.918804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.287 [2024-11-28 16:19:50.918813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.287 
16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.287 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.288 "name": "Existed_Raid", 00:06:59.288 "uuid": "99111646-30ee-42fe-917e-3b0fb80464c2", 00:06:59.288 "strip_size_kb": 
64, 00:06:59.288 "state": "configuring", 00:06:59.288 "raid_level": "raid0", 00:06:59.288 "superblock": true, 00:06:59.288 "num_base_bdevs": 2, 00:06:59.288 "num_base_bdevs_discovered": 0, 00:06:59.288 "num_base_bdevs_operational": 2, 00:06:59.288 "base_bdevs_list": [ 00:06:59.288 { 00:06:59.288 "name": "BaseBdev1", 00:06:59.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.288 "is_configured": false, 00:06:59.288 "data_offset": 0, 00:06:59.288 "data_size": 0 00:06:59.288 }, 00:06:59.288 { 00:06:59.288 "name": "BaseBdev2", 00:06:59.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.288 "is_configured": false, 00:06:59.288 "data_offset": 0, 00:06:59.288 "data_size": 0 00:06:59.288 } 00:06:59.288 ] 00:06:59.288 }' 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.288 16:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.857 [2024-11-28 16:19:51.333909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.857 [2024-11-28 16:19:51.334007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.857 16:19:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.857 [2024-11-28 16:19:51.345938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.857 [2024-11-28 16:19:51.346015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.857 [2024-11-28 16:19:51.346041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.857 [2024-11-28 16:19:51.346063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.857 [2024-11-28 16:19:51.366507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.857 BaseBdev1 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.857 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.857 [ 00:06:59.857 { 00:06:59.857 "name": "BaseBdev1", 00:06:59.857 "aliases": [ 00:06:59.857 "8158cc20-43e0-4a6f-92c6-81e0fd3d1ec7" 00:06:59.857 ], 00:06:59.857 "product_name": "Malloc disk", 00:06:59.857 "block_size": 512, 00:06:59.857 "num_blocks": 65536, 00:06:59.857 "uuid": "8158cc20-43e0-4a6f-92c6-81e0fd3d1ec7", 00:06:59.858 "assigned_rate_limits": { 00:06:59.858 "rw_ios_per_sec": 0, 00:06:59.858 "rw_mbytes_per_sec": 0, 00:06:59.858 "r_mbytes_per_sec": 0, 00:06:59.858 "w_mbytes_per_sec": 0 00:06:59.858 }, 00:06:59.858 "claimed": true, 00:06:59.858 "claim_type": "exclusive_write", 00:06:59.858 "zoned": false, 00:06:59.858 "supported_io_types": { 00:06:59.858 "read": true, 00:06:59.858 "write": true, 00:06:59.858 "unmap": true, 00:06:59.858 "flush": true, 00:06:59.858 "reset": true, 00:06:59.858 "nvme_admin": false, 00:06:59.858 "nvme_io": false, 00:06:59.858 "nvme_io_md": false, 00:06:59.858 "write_zeroes": true, 00:06:59.858 "zcopy": true, 00:06:59.858 "get_zone_info": false, 00:06:59.858 "zone_management": false, 00:06:59.858 "zone_append": false, 00:06:59.858 "compare": false, 00:06:59.858 "compare_and_write": false, 00:06:59.858 
"abort": true, 00:06:59.858 "seek_hole": false, 00:06:59.858 "seek_data": false, 00:06:59.858 "copy": true, 00:06:59.858 "nvme_iov_md": false 00:06:59.858 }, 00:06:59.858 "memory_domains": [ 00:06:59.858 { 00:06:59.858 "dma_device_id": "system", 00:06:59.858 "dma_device_type": 1 00:06:59.858 }, 00:06:59.858 { 00:06:59.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.858 "dma_device_type": 2 00:06:59.858 } 00:06:59.858 ], 00:06:59.858 "driver_specific": {} 00:06:59.858 } 00:06:59.858 ] 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.858 "name": "Existed_Raid", 00:06:59.858 "uuid": "ce25f628-a770-458c-b0a7-c472c7fac13f", 00:06:59.858 "strip_size_kb": 64, 00:06:59.858 "state": "configuring", 00:06:59.858 "raid_level": "raid0", 00:06:59.858 "superblock": true, 00:06:59.858 "num_base_bdevs": 2, 00:06:59.858 "num_base_bdevs_discovered": 1, 00:06:59.858 "num_base_bdevs_operational": 2, 00:06:59.858 "base_bdevs_list": [ 00:06:59.858 { 00:06:59.858 "name": "BaseBdev1", 00:06:59.858 "uuid": "8158cc20-43e0-4a6f-92c6-81e0fd3d1ec7", 00:06:59.858 "is_configured": true, 00:06:59.858 "data_offset": 2048, 00:06:59.858 "data_size": 63488 00:06:59.858 }, 00:06:59.858 { 00:06:59.858 "name": "BaseBdev2", 00:06:59.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.858 "is_configured": false, 00:06:59.858 "data_offset": 0, 00:06:59.858 "data_size": 0 00:06:59.858 } 00:06:59.858 ] 00:06:59.858 }' 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.858 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.118 [2024-11-28 16:19:51.809782] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.118 [2024-11-28 16:19:51.809900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.118 [2024-11-28 16:19:51.817817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.118 [2024-11-28 16:19:51.819661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.118 [2024-11-28 16:19:51.819758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.118 "name": "Existed_Raid", 00:07:00.118 "uuid": "cb176842-eefa-4c68-be6f-3293137deafc", 00:07:00.118 "strip_size_kb": 64, 00:07:00.118 "state": "configuring", 00:07:00.118 "raid_level": "raid0", 00:07:00.118 "superblock": true, 00:07:00.118 "num_base_bdevs": 2, 00:07:00.118 "num_base_bdevs_discovered": 1, 00:07:00.118 "num_base_bdevs_operational": 2, 00:07:00.118 "base_bdevs_list": [ 00:07:00.118 { 00:07:00.118 "name": "BaseBdev1", 00:07:00.118 "uuid": "8158cc20-43e0-4a6f-92c6-81e0fd3d1ec7", 00:07:00.118 "is_configured": true, 00:07:00.118 "data_offset": 2048, 
00:07:00.118 "data_size": 63488 00:07:00.118 }, 00:07:00.118 { 00:07:00.118 "name": "BaseBdev2", 00:07:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.118 "is_configured": false, 00:07:00.118 "data_offset": 0, 00:07:00.118 "data_size": 0 00:07:00.118 } 00:07:00.118 ] 00:07:00.118 }' 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.118 16:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.688 [2024-11-28 16:19:52.284178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:00.688 [2024-11-28 16:19:52.284961] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:00.688 [2024-11-28 16:19:52.285051] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.688 BaseBdev2 00:07:00.688 [2024-11-28 16:19:52.286024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.688 [2024-11-28 16:19:52.286499] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:00.688 [2024-11-28 16:19:52.286553] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:00.688 [2024-11-28 16:19:52.287019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.688 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.688 [ 00:07:00.688 { 00:07:00.688 "name": "BaseBdev2", 00:07:00.688 "aliases": [ 00:07:00.688 "15687fdf-2e11-416b-9ba9-fa8c8f8638ca" 00:07:00.688 ], 00:07:00.688 "product_name": "Malloc disk", 00:07:00.688 "block_size": 512, 00:07:00.688 "num_blocks": 65536, 00:07:00.688 "uuid": "15687fdf-2e11-416b-9ba9-fa8c8f8638ca", 00:07:00.688 "assigned_rate_limits": { 00:07:00.688 "rw_ios_per_sec": 0, 00:07:00.688 "rw_mbytes_per_sec": 0, 00:07:00.688 "r_mbytes_per_sec": 0, 00:07:00.688 "w_mbytes_per_sec": 0 00:07:00.688 }, 00:07:00.688 "claimed": true, 00:07:00.688 "claim_type": 
"exclusive_write", 00:07:00.688 "zoned": false, 00:07:00.688 "supported_io_types": { 00:07:00.688 "read": true, 00:07:00.688 "write": true, 00:07:00.688 "unmap": true, 00:07:00.688 "flush": true, 00:07:00.688 "reset": true, 00:07:00.688 "nvme_admin": false, 00:07:00.688 "nvme_io": false, 00:07:00.688 "nvme_io_md": false, 00:07:00.688 "write_zeroes": true, 00:07:00.689 "zcopy": true, 00:07:00.689 "get_zone_info": false, 00:07:00.689 "zone_management": false, 00:07:00.689 "zone_append": false, 00:07:00.689 "compare": false, 00:07:00.689 "compare_and_write": false, 00:07:00.689 "abort": true, 00:07:00.689 "seek_hole": false, 00:07:00.689 "seek_data": false, 00:07:00.689 "copy": true, 00:07:00.689 "nvme_iov_md": false 00:07:00.689 }, 00:07:00.689 "memory_domains": [ 00:07:00.689 { 00:07:00.689 "dma_device_id": "system", 00:07:00.689 "dma_device_type": 1 00:07:00.689 }, 00:07:00.689 { 00:07:00.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.689 "dma_device_type": 2 00:07:00.689 } 00:07:00.689 ], 00:07:00.689 "driver_specific": {} 00:07:00.689 } 00:07:00.689 ] 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.689 "name": "Existed_Raid", 00:07:00.689 "uuid": "cb176842-eefa-4c68-be6f-3293137deafc", 00:07:00.689 "strip_size_kb": 64, 00:07:00.689 "state": "online", 00:07:00.689 "raid_level": "raid0", 00:07:00.689 "superblock": true, 00:07:00.689 "num_base_bdevs": 2, 00:07:00.689 "num_base_bdevs_discovered": 2, 00:07:00.689 "num_base_bdevs_operational": 2, 00:07:00.689 "base_bdevs_list": [ 00:07:00.689 { 00:07:00.689 "name": "BaseBdev1", 00:07:00.689 "uuid": "8158cc20-43e0-4a6f-92c6-81e0fd3d1ec7", 00:07:00.689 "is_configured": true, 00:07:00.689 "data_offset": 2048, 00:07:00.689 "data_size": 63488 
00:07:00.689 }, 00:07:00.689 { 00:07:00.689 "name": "BaseBdev2", 00:07:00.689 "uuid": "15687fdf-2e11-416b-9ba9-fa8c8f8638ca", 00:07:00.689 "is_configured": true, 00:07:00.689 "data_offset": 2048, 00:07:00.689 "data_size": 63488 00:07:00.689 } 00:07:00.689 ] 00:07:00.689 }' 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.689 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.261 [2024-11-28 16:19:52.811587] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.261 "name": 
"Existed_Raid", 00:07:01.261 "aliases": [ 00:07:01.261 "cb176842-eefa-4c68-be6f-3293137deafc" 00:07:01.261 ], 00:07:01.261 "product_name": "Raid Volume", 00:07:01.261 "block_size": 512, 00:07:01.261 "num_blocks": 126976, 00:07:01.261 "uuid": "cb176842-eefa-4c68-be6f-3293137deafc", 00:07:01.261 "assigned_rate_limits": { 00:07:01.261 "rw_ios_per_sec": 0, 00:07:01.261 "rw_mbytes_per_sec": 0, 00:07:01.261 "r_mbytes_per_sec": 0, 00:07:01.261 "w_mbytes_per_sec": 0 00:07:01.261 }, 00:07:01.261 "claimed": false, 00:07:01.261 "zoned": false, 00:07:01.261 "supported_io_types": { 00:07:01.261 "read": true, 00:07:01.261 "write": true, 00:07:01.261 "unmap": true, 00:07:01.261 "flush": true, 00:07:01.261 "reset": true, 00:07:01.261 "nvme_admin": false, 00:07:01.261 "nvme_io": false, 00:07:01.261 "nvme_io_md": false, 00:07:01.261 "write_zeroes": true, 00:07:01.261 "zcopy": false, 00:07:01.261 "get_zone_info": false, 00:07:01.261 "zone_management": false, 00:07:01.261 "zone_append": false, 00:07:01.261 "compare": false, 00:07:01.261 "compare_and_write": false, 00:07:01.261 "abort": false, 00:07:01.261 "seek_hole": false, 00:07:01.261 "seek_data": false, 00:07:01.261 "copy": false, 00:07:01.261 "nvme_iov_md": false 00:07:01.261 }, 00:07:01.261 "memory_domains": [ 00:07:01.261 { 00:07:01.261 "dma_device_id": "system", 00:07:01.261 "dma_device_type": 1 00:07:01.261 }, 00:07:01.261 { 00:07:01.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.261 "dma_device_type": 2 00:07:01.261 }, 00:07:01.261 { 00:07:01.261 "dma_device_id": "system", 00:07:01.261 "dma_device_type": 1 00:07:01.261 }, 00:07:01.261 { 00:07:01.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.261 "dma_device_type": 2 00:07:01.261 } 00:07:01.261 ], 00:07:01.261 "driver_specific": { 00:07:01.261 "raid": { 00:07:01.261 "uuid": "cb176842-eefa-4c68-be6f-3293137deafc", 00:07:01.261 "strip_size_kb": 64, 00:07:01.261 "state": "online", 00:07:01.261 "raid_level": "raid0", 00:07:01.261 "superblock": true, 00:07:01.261 
"num_base_bdevs": 2, 00:07:01.261 "num_base_bdevs_discovered": 2, 00:07:01.261 "num_base_bdevs_operational": 2, 00:07:01.261 "base_bdevs_list": [ 00:07:01.261 { 00:07:01.261 "name": "BaseBdev1", 00:07:01.261 "uuid": "8158cc20-43e0-4a6f-92c6-81e0fd3d1ec7", 00:07:01.261 "is_configured": true, 00:07:01.261 "data_offset": 2048, 00:07:01.261 "data_size": 63488 00:07:01.261 }, 00:07:01.261 { 00:07:01.261 "name": "BaseBdev2", 00:07:01.261 "uuid": "15687fdf-2e11-416b-9ba9-fa8c8f8638ca", 00:07:01.261 "is_configured": true, 00:07:01.261 "data_offset": 2048, 00:07:01.261 "data_size": 63488 00:07:01.261 } 00:07:01.261 ] 00:07:01.261 } 00:07:01.261 } 00:07:01.261 }' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:01.261 BaseBdev2' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.261 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.262 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:01.262 16:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.262 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.262 16:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.262 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.262 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.262 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.262 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:01.262 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.262 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.262 [2024-11-28 16:19:53.015001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:01.262 [2024-11-28 16:19:53.015050] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.262 [2024-11-28 16:19:53.015110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.522 16:19:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.522 "name": "Existed_Raid", 00:07:01.522 "uuid": "cb176842-eefa-4c68-be6f-3293137deafc", 00:07:01.522 "strip_size_kb": 64, 00:07:01.522 "state": "offline", 00:07:01.522 "raid_level": "raid0", 00:07:01.522 "superblock": true, 00:07:01.522 "num_base_bdevs": 2, 00:07:01.522 "num_base_bdevs_discovered": 1, 00:07:01.522 "num_base_bdevs_operational": 1, 00:07:01.522 "base_bdevs_list": [ 00:07:01.522 { 00:07:01.522 "name": null, 00:07:01.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.522 "is_configured": false, 00:07:01.522 "data_offset": 0, 00:07:01.522 "data_size": 63488 00:07:01.522 }, 00:07:01.522 { 00:07:01.522 "name": "BaseBdev2", 00:07:01.522 "uuid": "15687fdf-2e11-416b-9ba9-fa8c8f8638ca", 00:07:01.522 "is_configured": true, 00:07:01.522 "data_offset": 2048, 00:07:01.522 "data_size": 63488 00:07:01.522 } 00:07:01.522 ] 00:07:01.522 }' 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.522 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:01.783 16:19:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.783 [2024-11-28 16:19:53.499338] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:01.783 [2024-11-28 16:19:53.499435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:01.783 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.783 16:19:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72389 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72389 ']' 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72389 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72389 00:07:02.043 killing process with pid 72389 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72389' 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72389 00:07:02.043 [2024-11-28 16:19:53.602208] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.043 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72389 00:07:02.043 [2024-11-28 16:19:53.603827] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.304 ************************************ 
00:07:02.304 END TEST raid_state_function_test_sb 00:07:02.304 ************************************ 00:07:02.304 16:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:02.304 00:07:02.304 real 0m3.991s 00:07:02.304 user 0m6.177s 00:07:02.304 sys 0m0.731s 00:07:02.304 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.304 16:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 16:19:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:02.304 16:19:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:02.304 16:19:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.304 16:19:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 ************************************ 00:07:02.304 START TEST raid_superblock_test 00:07:02.304 ************************************ 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:02.304 
16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72630 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72630 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72630 ']' 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.304 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.566 [2024-11-28 16:19:54.151508] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:02.566 [2024-11-28 16:19:54.151713] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72630 ] 00:07:02.566 [2024-11-28 16:19:54.309760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.826 [2024-11-28 16:19:54.354961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.826 [2024-11-28 16:19:54.397404] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.826 [2024-11-28 16:19:54.397518] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:03.421 16:19:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.421 malloc1 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.421 [2024-11-28 16:19:54.979673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:03.421 [2024-11-28 16:19:54.979811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.421 [2024-11-28 16:19:54.979864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:03.421 [2024-11-28 16:19:54.979905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.421 [2024-11-28 16:19:54.982083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.421 [2024-11-28 16:19:54.982152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:03.421 pt1 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.421 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:03.421 16:19:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.422 16:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.422 malloc2 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.422 [2024-11-28 16:19:55.023824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:03.422 [2024-11-28 16:19:55.024045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.422 [2024-11-28 16:19:55.024143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:03.422 
[2024-11-28 16:19:55.024274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.422 [2024-11-28 16:19:55.029147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.422 [2024-11-28 16:19:55.029300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:03.422 pt2 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.422 [2024-11-28 16:19:55.037640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:03.422 [2024-11-28 16:19:55.040605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:03.422 [2024-11-28 16:19:55.040883] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:03.422 [2024-11-28 16:19:55.040959] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.422 [2024-11-28 16:19:55.041384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:03.422 [2024-11-28 16:19:55.041637] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:03.422 [2024-11-28 16:19:55.041700] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:03.422 [2024-11-28 16:19:55.042008] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.422 "name": "raid_bdev1", 00:07:03.422 "uuid": 
"d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:03.422 "strip_size_kb": 64, 00:07:03.422 "state": "online", 00:07:03.422 "raid_level": "raid0", 00:07:03.422 "superblock": true, 00:07:03.422 "num_base_bdevs": 2, 00:07:03.422 "num_base_bdevs_discovered": 2, 00:07:03.422 "num_base_bdevs_operational": 2, 00:07:03.422 "base_bdevs_list": [ 00:07:03.422 { 00:07:03.422 "name": "pt1", 00:07:03.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.422 "is_configured": true, 00:07:03.422 "data_offset": 2048, 00:07:03.422 "data_size": 63488 00:07:03.422 }, 00:07:03.422 { 00:07:03.422 "name": "pt2", 00:07:03.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.422 "is_configured": true, 00:07:03.422 "data_offset": 2048, 00:07:03.422 "data_size": 63488 00:07:03.422 } 00:07:03.422 ] 00:07:03.422 }' 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.422 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.993 16:19:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 [2024-11-28 16:19:55.477466] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.993 "name": "raid_bdev1", 00:07:03.993 "aliases": [ 00:07:03.993 "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192" 00:07:03.993 ], 00:07:03.993 "product_name": "Raid Volume", 00:07:03.993 "block_size": 512, 00:07:03.993 "num_blocks": 126976, 00:07:03.993 "uuid": "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:03.993 "assigned_rate_limits": { 00:07:03.993 "rw_ios_per_sec": 0, 00:07:03.993 "rw_mbytes_per_sec": 0, 00:07:03.993 "r_mbytes_per_sec": 0, 00:07:03.993 "w_mbytes_per_sec": 0 00:07:03.993 }, 00:07:03.993 "claimed": false, 00:07:03.993 "zoned": false, 00:07:03.993 "supported_io_types": { 00:07:03.993 "read": true, 00:07:03.993 "write": true, 00:07:03.993 "unmap": true, 00:07:03.993 "flush": true, 00:07:03.993 "reset": true, 00:07:03.993 "nvme_admin": false, 00:07:03.993 "nvme_io": false, 00:07:03.993 "nvme_io_md": false, 00:07:03.993 "write_zeroes": true, 00:07:03.993 "zcopy": false, 00:07:03.993 "get_zone_info": false, 00:07:03.993 "zone_management": false, 00:07:03.993 "zone_append": false, 00:07:03.993 "compare": false, 00:07:03.993 "compare_and_write": false, 00:07:03.993 "abort": false, 00:07:03.993 "seek_hole": false, 00:07:03.993 "seek_data": false, 00:07:03.993 "copy": false, 00:07:03.993 "nvme_iov_md": false 00:07:03.993 }, 00:07:03.993 "memory_domains": [ 00:07:03.993 { 00:07:03.993 "dma_device_id": "system", 00:07:03.993 "dma_device_type": 1 00:07:03.993 }, 00:07:03.993 { 00:07:03.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.993 "dma_device_type": 2 00:07:03.993 }, 00:07:03.993 { 00:07:03.993 "dma_device_id": "system", 00:07:03.993 "dma_device_type": 
1 00:07:03.993 }, 00:07:03.993 { 00:07:03.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.993 "dma_device_type": 2 00:07:03.993 } 00:07:03.993 ], 00:07:03.993 "driver_specific": { 00:07:03.993 "raid": { 00:07:03.993 "uuid": "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:03.993 "strip_size_kb": 64, 00:07:03.993 "state": "online", 00:07:03.993 "raid_level": "raid0", 00:07:03.993 "superblock": true, 00:07:03.993 "num_base_bdevs": 2, 00:07:03.993 "num_base_bdevs_discovered": 2, 00:07:03.993 "num_base_bdevs_operational": 2, 00:07:03.993 "base_bdevs_list": [ 00:07:03.993 { 00:07:03.993 "name": "pt1", 00:07:03.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:03.993 "is_configured": true, 00:07:03.993 "data_offset": 2048, 00:07:03.993 "data_size": 63488 00:07:03.993 }, 00:07:03.993 { 00:07:03.993 "name": "pt2", 00:07:03.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:03.993 "is_configured": true, 00:07:03.993 "data_offset": 2048, 00:07:03.993 "data_size": 63488 00:07:03.993 } 00:07:03.993 ] 00:07:03.993 } 00:07:03.993 } 00:07:03.993 }' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:03.993 pt2' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.993 16:19:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 [2024-11-28 16:19:55.681023] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d6317cda-46fc-43a7-8fe9-fcb7c8a4c192 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d6317cda-46fc-43a7-8fe9-fcb7c8a4c192 ']' 00:07:03.993 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.994 [2024-11-28 16:19:55.724708] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.994 [2024-11-28 16:19:55.724773] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.994 [2024-11-28 16:19:55.724910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.994 [2024-11-28 16:19:55.725003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.994 [2024-11-28 16:19:55.725060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.994 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 [2024-11-28 16:19:55.848549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:04.253 [2024-11-28 16:19:55.850363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:04.253 [2024-11-28 16:19:55.850477] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:04.253 [2024-11-28 16:19:55.850564] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:04.253 [2024-11-28 16:19:55.850620] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:04.253 [2024-11-28 16:19:55.850649] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:04.253 request: 00:07:04.253 { 00:07:04.253 "name": "raid_bdev1", 00:07:04.253 "raid_level": "raid0", 00:07:04.253 "base_bdevs": [ 00:07:04.253 "malloc1", 00:07:04.253 "malloc2" 00:07:04.253 ], 00:07:04.253 "strip_size_kb": 64, 00:07:04.253 "superblock": false, 00:07:04.253 "method": "bdev_raid_create", 00:07:04.253 "req_id": 1 00:07:04.253 } 00:07:04.253 Got JSON-RPC error response 00:07:04.253 response: 00:07:04.253 { 00:07:04.253 "code": -17, 00:07:04.253 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:04.253 } 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.253 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.253 [2024-11-28 16:19:55.916385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:04.253 [2024-11-28 16:19:55.916469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.254 [2024-11-28 16:19:55.916521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:04.254 [2024-11-28 16:19:55.916550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.254 [2024-11-28 16:19:55.918641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.254 [2024-11-28 16:19:55.918708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:04.254 [2024-11-28 16:19:55.918802] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:04.254 [2024-11-28 16:19:55.918875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:04.254 pt1 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.254 "name": "raid_bdev1", 00:07:04.254 "uuid": "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:04.254 "strip_size_kb": 64, 00:07:04.254 "state": "configuring", 00:07:04.254 "raid_level": "raid0", 00:07:04.254 "superblock": true, 00:07:04.254 "num_base_bdevs": 2, 00:07:04.254 "num_base_bdevs_discovered": 1, 00:07:04.254 "num_base_bdevs_operational": 2, 00:07:04.254 "base_bdevs_list": [ 00:07:04.254 { 00:07:04.254 "name": "pt1", 00:07:04.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.254 "is_configured": true, 00:07:04.254 "data_offset": 2048, 00:07:04.254 "data_size": 63488 00:07:04.254 }, 00:07:04.254 { 00:07:04.254 "name": null, 00:07:04.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.254 "is_configured": false, 00:07:04.254 "data_offset": 2048, 00:07:04.254 "data_size": 63488 00:07:04.254 } 00:07:04.254 ] 00:07:04.254 }' 00:07:04.254 16:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.254 16:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.826 [2024-11-28 16:19:56.315778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:04.826 [2024-11-28 16:19:56.315883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.826 [2024-11-28 16:19:56.315925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:04.826 [2024-11-28 16:19:56.315953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.826 [2024-11-28 16:19:56.316350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.826 [2024-11-28 16:19:56.316400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:04.826 [2024-11-28 16:19:56.316495] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:04.826 [2024-11-28 16:19:56.316541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:04.826 [2024-11-28 16:19:56.316642] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:04.826 [2024-11-28 16:19:56.316677] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.826 [2024-11-28 16:19:56.316923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:04.826 [2024-11-28 16:19:56.317065] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:04.826 [2024-11-28 16:19:56.317109] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:04.826 [2024-11-28 16:19:56.317238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.826 pt2 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.826 "name": "raid_bdev1", 00:07:04.826 "uuid": "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:04.826 "strip_size_kb": 64, 00:07:04.826 "state": "online", 00:07:04.826 "raid_level": "raid0", 00:07:04.826 "superblock": true, 00:07:04.826 "num_base_bdevs": 2, 00:07:04.826 "num_base_bdevs_discovered": 2, 00:07:04.826 "num_base_bdevs_operational": 2, 00:07:04.826 "base_bdevs_list": [ 00:07:04.826 { 00:07:04.826 "name": "pt1", 00:07:04.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:04.826 "is_configured": true, 00:07:04.826 "data_offset": 2048, 00:07:04.826 "data_size": 63488 00:07:04.826 }, 00:07:04.826 { 00:07:04.826 "name": "pt2", 00:07:04.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:04.826 "is_configured": true, 00:07:04.826 "data_offset": 2048, 00:07:04.826 "data_size": 63488 00:07:04.826 } 00:07:04.826 ] 00:07:04.826 }' 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.826 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.086 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:05.087 
16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:05.087 [2024-11-28 16:19:56.771238] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:05.087 "name": "raid_bdev1", 00:07:05.087 "aliases": [ 00:07:05.087 "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192" 00:07:05.087 ], 00:07:05.087 "product_name": "Raid Volume", 00:07:05.087 "block_size": 512, 00:07:05.087 "num_blocks": 126976, 00:07:05.087 "uuid": "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:05.087 "assigned_rate_limits": { 00:07:05.087 "rw_ios_per_sec": 0, 00:07:05.087 "rw_mbytes_per_sec": 0, 00:07:05.087 "r_mbytes_per_sec": 0, 00:07:05.087 "w_mbytes_per_sec": 0 00:07:05.087 }, 00:07:05.087 "claimed": false, 00:07:05.087 "zoned": false, 00:07:05.087 "supported_io_types": { 00:07:05.087 "read": true, 00:07:05.087 "write": true, 00:07:05.087 "unmap": true, 00:07:05.087 "flush": true, 00:07:05.087 "reset": true, 00:07:05.087 "nvme_admin": false, 00:07:05.087 "nvme_io": false, 00:07:05.087 "nvme_io_md": false, 00:07:05.087 
"write_zeroes": true, 00:07:05.087 "zcopy": false, 00:07:05.087 "get_zone_info": false, 00:07:05.087 "zone_management": false, 00:07:05.087 "zone_append": false, 00:07:05.087 "compare": false, 00:07:05.087 "compare_and_write": false, 00:07:05.087 "abort": false, 00:07:05.087 "seek_hole": false, 00:07:05.087 "seek_data": false, 00:07:05.087 "copy": false, 00:07:05.087 "nvme_iov_md": false 00:07:05.087 }, 00:07:05.087 "memory_domains": [ 00:07:05.087 { 00:07:05.087 "dma_device_id": "system", 00:07:05.087 "dma_device_type": 1 00:07:05.087 }, 00:07:05.087 { 00:07:05.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.087 "dma_device_type": 2 00:07:05.087 }, 00:07:05.087 { 00:07:05.087 "dma_device_id": "system", 00:07:05.087 "dma_device_type": 1 00:07:05.087 }, 00:07:05.087 { 00:07:05.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.087 "dma_device_type": 2 00:07:05.087 } 00:07:05.087 ], 00:07:05.087 "driver_specific": { 00:07:05.087 "raid": { 00:07:05.087 "uuid": "d6317cda-46fc-43a7-8fe9-fcb7c8a4c192", 00:07:05.087 "strip_size_kb": 64, 00:07:05.087 "state": "online", 00:07:05.087 "raid_level": "raid0", 00:07:05.087 "superblock": true, 00:07:05.087 "num_base_bdevs": 2, 00:07:05.087 "num_base_bdevs_discovered": 2, 00:07:05.087 "num_base_bdevs_operational": 2, 00:07:05.087 "base_bdevs_list": [ 00:07:05.087 { 00:07:05.087 "name": "pt1", 00:07:05.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:05.087 "is_configured": true, 00:07:05.087 "data_offset": 2048, 00:07:05.087 "data_size": 63488 00:07:05.087 }, 00:07:05.087 { 00:07:05.087 "name": "pt2", 00:07:05.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:05.087 "is_configured": true, 00:07:05.087 "data_offset": 2048, 00:07:05.087 "data_size": 63488 00:07:05.087 } 00:07:05.087 ] 00:07:05.087 } 00:07:05.087 } 00:07:05.087 }' 00:07:05.087 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:05.348 pt2' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.348 16:19:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.348 16:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 [2024-11-28 16:19:56.994802] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d6317cda-46fc-43a7-8fe9-fcb7c8a4c192 '!=' d6317cda-46fc-43a7-8fe9-fcb7c8a4c192 ']' 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72630 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72630 ']' 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72630 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72630 00:07:05.348 killing process with pid 72630 
00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72630' 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72630 00:07:05.348 [2024-11-28 16:19:57.049364] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.348 [2024-11-28 16:19:57.049436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.348 [2024-11-28 16:19:57.049486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.348 [2024-11-28 16:19:57.049496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:05.348 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72630 00:07:05.348 [2024-11-28 16:19:57.072657] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.608 16:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:05.608 00:07:05.608 real 0m3.264s 00:07:05.608 user 0m4.959s 00:07:05.608 sys 0m0.680s 00:07:05.608 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.608 ************************************ 00:07:05.608 END TEST raid_superblock_test 00:07:05.608 ************************************ 00:07:05.608 16:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.608 16:19:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:05.608 16:19:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:05.608 16:19:57 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.608 16:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.868 ************************************ 00:07:05.868 START TEST raid_read_error_test 00:07:05.868 ************************************ 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:05.868 16:19:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p0fgvAlIuh 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72825 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72825 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72825 ']' 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.868 16:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.868 [2024-11-28 16:19:57.490827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:05.868 [2024-11-28 16:19:57.491060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72825 ] 00:07:06.128 [2024-11-28 16:19:57.651588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.128 [2024-11-28 16:19:57.696210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.128 [2024-11-28 16:19:57.737916] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.128 [2024-11-28 16:19:57.738028] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.696 BaseBdev1_malloc 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.696 true 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.696 [2024-11-28 16:19:58.339746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:06.696 [2024-11-28 16:19:58.339849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.696 [2024-11-28 16:19:58.339879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:06.696 [2024-11-28 16:19:58.339889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.696 [2024-11-28 16:19:58.341956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.696 [2024-11-28 16:19:58.342038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:06.696 BaseBdev1 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:06.696 BaseBdev2_malloc 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:06.696 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.697 true 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.697 [2024-11-28 16:19:58.397572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:06.697 [2024-11-28 16:19:58.397709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.697 [2024-11-28 16:19:58.397770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:06.697 [2024-11-28 16:19:58.397826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.697 [2024-11-28 16:19:58.400880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.697 [2024-11-28 16:19:58.400957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:06.697 BaseBdev2 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:06.697 16:19:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.697 [2024-11-28 16:19:58.409655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.697 [2024-11-28 16:19:58.411561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.697 [2024-11-28 16:19:58.411781] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:06.697 [2024-11-28 16:19:58.411819] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.697 [2024-11-28 16:19:58.412107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:06.697 [2024-11-28 16:19:58.412267] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:06.697 [2024-11-28 16:19:58.412316] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:06.697 [2024-11-28 16:19:58.412478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.697 "name": "raid_bdev1", 00:07:06.697 "uuid": "70ca5a2c-6fa3-4032-b289-9ce51dddfa90", 00:07:06.697 "strip_size_kb": 64, 00:07:06.697 "state": "online", 00:07:06.697 "raid_level": "raid0", 00:07:06.697 "superblock": true, 00:07:06.697 "num_base_bdevs": 2, 00:07:06.697 "num_base_bdevs_discovered": 2, 00:07:06.697 "num_base_bdevs_operational": 2, 00:07:06.697 "base_bdevs_list": [ 00:07:06.697 { 00:07:06.697 "name": "BaseBdev1", 00:07:06.697 "uuid": "ecbde0ba-9ebe-5d30-8828-f79d496f8e51", 00:07:06.697 "is_configured": true, 00:07:06.697 "data_offset": 2048, 00:07:06.697 "data_size": 63488 00:07:06.697 }, 00:07:06.697 { 00:07:06.697 "name": "BaseBdev2", 00:07:06.697 "uuid": "40bbaa21-6275-5eaa-a2d8-6fac9188d187", 00:07:06.697 "is_configured": true, 00:07:06.697 "data_offset": 2048, 00:07:06.697 "data_size": 63488 00:07:06.697 } 00:07:06.697 ] 00:07:06.697 }' 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.697 16:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.267 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:07.267 16:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:07.267 [2024-11-28 16:19:58.937135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.208 "name": "raid_bdev1", 00:07:08.208 "uuid": "70ca5a2c-6fa3-4032-b289-9ce51dddfa90", 00:07:08.208 "strip_size_kb": 64, 00:07:08.208 "state": "online", 00:07:08.208 "raid_level": "raid0", 00:07:08.208 "superblock": true, 00:07:08.208 "num_base_bdevs": 2, 00:07:08.208 "num_base_bdevs_discovered": 2, 00:07:08.208 "num_base_bdevs_operational": 2, 00:07:08.208 "base_bdevs_list": [ 00:07:08.208 { 00:07:08.208 "name": "BaseBdev1", 00:07:08.208 "uuid": "ecbde0ba-9ebe-5d30-8828-f79d496f8e51", 00:07:08.208 "is_configured": true, 00:07:08.208 "data_offset": 2048, 00:07:08.208 "data_size": 63488 00:07:08.208 }, 00:07:08.208 { 00:07:08.208 "name": "BaseBdev2", 00:07:08.208 "uuid": "40bbaa21-6275-5eaa-a2d8-6fac9188d187", 00:07:08.208 "is_configured": true, 00:07:08.208 "data_offset": 2048, 00:07:08.208 "data_size": 63488 00:07:08.208 } 00:07:08.208 ] 00:07:08.208 }' 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.208 16:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.777 [2024-11-28 16:20:00.336939] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.777 [2024-11-28 16:20:00.336968] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.777 [2024-11-28 16:20:00.339496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.777 [2024-11-28 16:20:00.339547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.777 [2024-11-28 16:20:00.339578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.777 [2024-11-28 16:20:00.339586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:08.777 { 00:07:08.777 "results": [ 00:07:08.777 { 00:07:08.777 "job": "raid_bdev1", 00:07:08.777 "core_mask": "0x1", 00:07:08.777 "workload": "randrw", 00:07:08.777 "percentage": 50, 00:07:08.777 "status": "finished", 00:07:08.777 "queue_depth": 1, 00:07:08.777 "io_size": 131072, 00:07:08.777 "runtime": 1.400599, 00:07:08.777 "iops": 18202.212053557087, 00:07:08.777 "mibps": 2275.276506694636, 00:07:08.777 "io_failed": 1, 00:07:08.777 "io_timeout": 0, 00:07:08.777 "avg_latency_us": 76.02417160313136, 00:07:08.777 "min_latency_us": 24.370305676855896, 00:07:08.777 "max_latency_us": 1359.3711790393013 00:07:08.777 } 00:07:08.777 ], 00:07:08.777 "core_count": 1 00:07:08.777 } 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72825 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72825 ']' 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72825 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72825 00:07:08.777 killing process with pid 72825 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72825' 00:07:08.777 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72825 00:07:08.778 [2024-11-28 16:20:00.385288] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.778 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72825 00:07:08.778 [2024-11-28 16:20:00.400888] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p0fgvAlIuh 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:09.037 ************************************ 00:07:09.037 END TEST raid_read_error_test 00:07:09.037 ************************************ 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:09.037 00:07:09.037 real 0m3.251s 00:07:09.037 user 0m4.105s 00:07:09.037 sys 0m0.516s 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.037 16:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.037 16:20:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:09.037 16:20:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.037 16:20:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.037 16:20:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.037 ************************************ 00:07:09.037 START TEST raid_write_error_test 00:07:09.037 ************************************ 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.037 16:20:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.POGCKAAO6D 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72954 00:07:09.037 16:20:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72954 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72954 ']' 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.037 16:20:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.297 [2024-11-28 16:20:00.810426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:09.297 [2024-11-28 16:20:00.810624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72954 ] 00:07:09.297 [2024-11-28 16:20:00.964091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.297 [2024-11-28 16:20:01.009698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.297 [2024-11-28 16:20:01.052431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.297 [2024-11-28 16:20:01.052517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.866 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.866 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.866 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.866 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:09.866 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.866 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 BaseBdev1_malloc 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 true 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 [2024-11-28 16:20:01.666870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:10.127 [2024-11-28 16:20:01.666963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.127 [2024-11-28 16:20:01.666999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:10.127 [2024-11-28 16:20:01.667010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.127 [2024-11-28 16:20:01.669229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.127 [2024-11-28 16:20:01.669307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:10.127 BaseBdev1 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 BaseBdev2_malloc 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:10.127 16:20:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 true 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 [2024-11-28 16:20:01.717785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:10.127 [2024-11-28 16:20:01.717895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.127 [2024-11-28 16:20:01.717916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:10.127 [2024-11-28 16:20:01.717924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.127 [2024-11-28 16:20:01.719925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.127 [2024-11-28 16:20:01.719959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:10.127 BaseBdev2 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 [2024-11-28 16:20:01.729803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:10.127 [2024-11-28 16:20:01.731636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.127 [2024-11-28 16:20:01.731866] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:10.127 [2024-11-28 16:20:01.731912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.127 [2024-11-28 16:20:01.732181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:10.127 [2024-11-28 16:20:01.732340] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:10.127 [2024-11-28 16:20:01.732382] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:10.127 [2024-11-28 16:20:01.732536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.127 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.127 "name": "raid_bdev1", 00:07:10.127 "uuid": "07022382-cc8d-4785-a202-b1ecfd1c4361", 00:07:10.127 "strip_size_kb": 64, 00:07:10.127 "state": "online", 00:07:10.127 "raid_level": "raid0", 00:07:10.127 "superblock": true, 00:07:10.127 "num_base_bdevs": 2, 00:07:10.127 "num_base_bdevs_discovered": 2, 00:07:10.127 "num_base_bdevs_operational": 2, 00:07:10.127 "base_bdevs_list": [ 00:07:10.127 { 00:07:10.127 "name": "BaseBdev1", 00:07:10.127 "uuid": "fede6334-1201-51ea-85f0-3b1695b90acc", 00:07:10.127 "is_configured": true, 00:07:10.127 "data_offset": 2048, 00:07:10.127 "data_size": 63488 00:07:10.128 }, 00:07:10.128 { 00:07:10.128 "name": "BaseBdev2", 00:07:10.128 "uuid": "7360b7cb-ef27-58d0-a8d3-b08a9aca84c1", 00:07:10.128 "is_configured": true, 00:07:10.128 "data_offset": 2048, 00:07:10.128 "data_size": 63488 00:07:10.128 } 00:07:10.128 ] 00:07:10.128 }' 00:07:10.128 16:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.128 16:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.388 16:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:10.388 16:20:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:10.648 [2024-11-28 16:20:02.245267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.589 16:20:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.589 "name": "raid_bdev1", 00:07:11.589 "uuid": "07022382-cc8d-4785-a202-b1ecfd1c4361", 00:07:11.589 "strip_size_kb": 64, 00:07:11.589 "state": "online", 00:07:11.589 "raid_level": "raid0", 00:07:11.589 "superblock": true, 00:07:11.589 "num_base_bdevs": 2, 00:07:11.589 "num_base_bdevs_discovered": 2, 00:07:11.589 "num_base_bdevs_operational": 2, 00:07:11.589 "base_bdevs_list": [ 00:07:11.589 { 00:07:11.589 "name": "BaseBdev1", 00:07:11.589 "uuid": "fede6334-1201-51ea-85f0-3b1695b90acc", 00:07:11.589 "is_configured": true, 00:07:11.589 "data_offset": 2048, 00:07:11.589 "data_size": 63488 00:07:11.589 }, 00:07:11.589 { 00:07:11.589 "name": "BaseBdev2", 00:07:11.589 "uuid": "7360b7cb-ef27-58d0-a8d3-b08a9aca84c1", 00:07:11.589 "is_configured": true, 00:07:11.589 "data_offset": 2048, 00:07:11.589 "data_size": 63488 00:07:11.589 } 00:07:11.589 ] 00:07:11.589 }' 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.589 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.849 [2024-11-28 16:20:03.584801] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.849 [2024-11-28 16:20:03.584841] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.849 [2024-11-28 16:20:03.587220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.849 [2024-11-28 16:20:03.587262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.849 [2024-11-28 16:20:03.587293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.849 [2024-11-28 16:20:03.587302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:11.849 { 00:07:11.849 "results": [ 00:07:11.849 { 00:07:11.849 "job": "raid_bdev1", 00:07:11.849 "core_mask": "0x1", 00:07:11.849 "workload": "randrw", 00:07:11.849 "percentage": 50, 00:07:11.849 "status": "finished", 00:07:11.849 "queue_depth": 1, 00:07:11.849 "io_size": 131072, 00:07:11.849 "runtime": 1.340515, 00:07:11.849 "iops": 17975.927162321943, 00:07:11.849 "mibps": 2246.990895290243, 00:07:11.849 "io_failed": 1, 00:07:11.849 "io_timeout": 0, 00:07:11.849 "avg_latency_us": 77.05430945908283, 00:07:11.849 "min_latency_us": 24.482096069868994, 00:07:11.849 "max_latency_us": 1380.8349344978167 00:07:11.849 } 00:07:11.849 ], 00:07:11.849 "core_count": 1 00:07:11.849 } 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72954 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72954 ']' 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72954 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.849 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72954 00:07:12.109 killing process with pid 72954 00:07:12.109 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.109 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.109 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72954' 00:07:12.109 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72954 00:07:12.109 [2024-11-28 16:20:03.633900] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.109 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72954 00:07:12.109 [2024-11-28 16:20:03.649326] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.POGCKAAO6D 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:12.370 ************************************ 00:07:12.370 END TEST raid_write_error_test 00:07:12.370 ************************************ 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:12.370 
16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:12.370 00:07:12.370 real 0m3.179s 00:07:12.370 user 0m4.006s 00:07:12.370 sys 0m0.508s 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.370 16:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.370 16:20:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:12.370 16:20:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:12.370 16:20:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:12.370 16:20:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.370 16:20:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.370 ************************************ 00:07:12.370 START TEST raid_state_function_test 00:07:12.370 ************************************ 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
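For reference, the `fail_per_s` values that the `bdev_raid.sh@849` check compares against `0.00` (0.71 for `raid_read_error_test`, 0.75 for `raid_write_error_test`) are consistent with the bdevperf `results` JSON captured above. The sketch below redoes that arithmetic standalone; the numbers are copied from the log, but the `io_failed / runtime` relationship is inferred from them, not taken from SPDK source:

```shell
#!/bin/sh
# Cross-check the two bdevperf summaries logged above.
#   fail/s = io_failed / runtime          (inferred from the logged values)
#   MiB/s  = IOPS * io_size / 2^20        (io_size 131072 = 128 KiB, from -o 128k)

# raid_read_error_test: io_failed=1, runtime=1.400599, iops=18202.212053557087
read_fail=$(awk 'BEGIN { printf "%.2f", 1 / 1.400599 }')
read_mibps=$(awk 'BEGIN { printf "%.2f", 18202.212053557087 * 131072 / 1048576 }')

# raid_write_error_test: io_failed=1, runtime=1.340515, iops=17975.927162321943
write_fail=$(awk 'BEGIN { printf "%.2f", 1 / 1.340515 }')
write_mibps=$(awk 'BEGIN { printf "%.2f", 17975.927162321943 * 131072 / 1048576 }')

echo "read:  fail/s=$read_fail  MiB/s=$read_mibps"
echo "write: fail/s=$write_fail MiB/s=$write_mibps"
```

Both derived fail rates match the values the test extracts with `awk '{print $6}'` from the bdevperf log, and both MiB/s figures reproduce the logged `mibps` fields, so the `[[ $fail_per_s != \0\.\0\0 ]]` gate is exercising real injected-error counts.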
00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73081 00:07:12.370 16:20:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73081' 00:07:12.370 Process raid pid: 73081 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73081 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73081 ']' 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.370 16:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.370 [2024-11-28 16:20:04.052493] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:12.370 [2024-11-28 16:20:04.052696] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.631 [2024-11-28 16:20:04.213381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.631 [2024-11-28 16:20:04.257437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.631 [2024-11-28 16:20:04.299063] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.631 [2024-11-28 16:20:04.299175] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.201 [2024-11-28 16:20:04.872235] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.201 [2024-11-28 16:20:04.872331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.201 [2024-11-28 16:20:04.872363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.201 [2024-11-28 16:20:04.872386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.201 16:20:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.201 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.202 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.202 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.202 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.202 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.202 "name": "Existed_Raid", 00:07:13.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.202 "strip_size_kb": 64, 00:07:13.202 "state": "configuring", 00:07:13.202 
"raid_level": "concat", 00:07:13.202 "superblock": false, 00:07:13.202 "num_base_bdevs": 2, 00:07:13.202 "num_base_bdevs_discovered": 0, 00:07:13.202 "num_base_bdevs_operational": 2, 00:07:13.202 "base_bdevs_list": [ 00:07:13.202 { 00:07:13.202 "name": "BaseBdev1", 00:07:13.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.202 "is_configured": false, 00:07:13.202 "data_offset": 0, 00:07:13.202 "data_size": 0 00:07:13.202 }, 00:07:13.202 { 00:07:13.202 "name": "BaseBdev2", 00:07:13.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.202 "is_configured": false, 00:07:13.202 "data_offset": 0, 00:07:13.202 "data_size": 0 00:07:13.202 } 00:07:13.202 ] 00:07:13.202 }' 00:07:13.202 16:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.202 16:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.770 [2024-11-28 16:20:05.291447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.770 [2024-11-28 16:20:05.291489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.770 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:13.770 [2024-11-28 16:20:05.303451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.770 [2024-11-28 16:20:05.303531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.771 [2024-11-28 16:20:05.303557] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.771 [2024-11-28 16:20:05.303579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.771 [2024-11-28 16:20:05.324172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.771 BaseBdev1 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.771 [ 00:07:13.771 { 00:07:13.771 "name": "BaseBdev1", 00:07:13.771 "aliases": [ 00:07:13.771 "6ed3ee00-0d16-4418-8c86-562aef3cef17" 00:07:13.771 ], 00:07:13.771 "product_name": "Malloc disk", 00:07:13.771 "block_size": 512, 00:07:13.771 "num_blocks": 65536, 00:07:13.771 "uuid": "6ed3ee00-0d16-4418-8c86-562aef3cef17", 00:07:13.771 "assigned_rate_limits": { 00:07:13.771 "rw_ios_per_sec": 0, 00:07:13.771 "rw_mbytes_per_sec": 0, 00:07:13.771 "r_mbytes_per_sec": 0, 00:07:13.771 "w_mbytes_per_sec": 0 00:07:13.771 }, 00:07:13.771 "claimed": true, 00:07:13.771 "claim_type": "exclusive_write", 00:07:13.771 "zoned": false, 00:07:13.771 "supported_io_types": { 00:07:13.771 "read": true, 00:07:13.771 "write": true, 00:07:13.771 "unmap": true, 00:07:13.771 "flush": true, 00:07:13.771 "reset": true, 00:07:13.771 "nvme_admin": false, 00:07:13.771 "nvme_io": false, 00:07:13.771 "nvme_io_md": false, 00:07:13.771 "write_zeroes": true, 00:07:13.771 "zcopy": true, 00:07:13.771 "get_zone_info": false, 00:07:13.771 "zone_management": false, 00:07:13.771 "zone_append": false, 00:07:13.771 "compare": false, 00:07:13.771 "compare_and_write": false, 00:07:13.771 "abort": true, 00:07:13.771 "seek_hole": false, 00:07:13.771 "seek_data": false, 00:07:13.771 "copy": true, 00:07:13.771 "nvme_iov_md": 
false 00:07:13.771 }, 00:07:13.771 "memory_domains": [ 00:07:13.771 { 00:07:13.771 "dma_device_id": "system", 00:07:13.771 "dma_device_type": 1 00:07:13.771 }, 00:07:13.771 { 00:07:13.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.771 "dma_device_type": 2 00:07:13.771 } 00:07:13.771 ], 00:07:13.771 "driver_specific": {} 00:07:13.771 } 00:07:13.771 ] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.771 
16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.771 "name": "Existed_Raid", 00:07:13.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.771 "strip_size_kb": 64, 00:07:13.771 "state": "configuring", 00:07:13.771 "raid_level": "concat", 00:07:13.771 "superblock": false, 00:07:13.771 "num_base_bdevs": 2, 00:07:13.771 "num_base_bdevs_discovered": 1, 00:07:13.771 "num_base_bdevs_operational": 2, 00:07:13.771 "base_bdevs_list": [ 00:07:13.771 { 00:07:13.771 "name": "BaseBdev1", 00:07:13.771 "uuid": "6ed3ee00-0d16-4418-8c86-562aef3cef17", 00:07:13.771 "is_configured": true, 00:07:13.771 "data_offset": 0, 00:07:13.771 "data_size": 65536 00:07:13.771 }, 00:07:13.771 { 00:07:13.771 "name": "BaseBdev2", 00:07:13.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.771 "is_configured": false, 00:07:13.771 "data_offset": 0, 00:07:13.771 "data_size": 0 00:07:13.771 } 00:07:13.771 ] 00:07:13.771 }' 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.771 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.031 [2024-11-28 16:20:05.779426] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.031 [2024-11-28 16:20:05.779500] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.031 [2024-11-28 16:20:05.791443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.031 [2024-11-28 16:20:05.793267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.031 [2024-11-28 16:20:05.793336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.031 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.291 "name": "Existed_Raid", 00:07:14.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.291 "strip_size_kb": 64, 00:07:14.291 "state": "configuring", 00:07:14.291 "raid_level": "concat", 00:07:14.291 "superblock": false, 00:07:14.291 "num_base_bdevs": 2, 00:07:14.291 "num_base_bdevs_discovered": 1, 00:07:14.291 "num_base_bdevs_operational": 2, 00:07:14.291 "base_bdevs_list": [ 00:07:14.291 { 00:07:14.291 "name": "BaseBdev1", 00:07:14.291 "uuid": "6ed3ee00-0d16-4418-8c86-562aef3cef17", 00:07:14.291 "is_configured": true, 00:07:14.291 "data_offset": 0, 00:07:14.291 "data_size": 65536 00:07:14.291 }, 00:07:14.291 { 00:07:14.291 "name": "BaseBdev2", 00:07:14.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.291 "is_configured": false, 00:07:14.291 "data_offset": 0, 00:07:14.291 "data_size": 0 00:07:14.291 } 
00:07:14.291 ] 00:07:14.291 }' 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.291 16:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.551 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.551 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.551 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.551 [2024-11-28 16:20:06.270565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.551 [2024-11-28 16:20:06.270916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:14.551 [2024-11-28 16:20:06.271064] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:14.551 [2024-11-28 16:20:06.272195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.551 [2024-11-28 16:20:06.272693] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:14.551 [2024-11-28 16:20:06.272814] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:14.551 [2024-11-28 16:20:06.273459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.551 BaseBdev2 00:07:14.551 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:14.552 16:20:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.552 [ 00:07:14.552 { 00:07:14.552 "name": "BaseBdev2", 00:07:14.552 "aliases": [ 00:07:14.552 "9c2998df-636f-49c7-b89f-2fffdf913c22" 00:07:14.552 ], 00:07:14.552 "product_name": "Malloc disk", 00:07:14.552 "block_size": 512, 00:07:14.552 "num_blocks": 65536, 00:07:14.552 "uuid": "9c2998df-636f-49c7-b89f-2fffdf913c22", 00:07:14.552 "assigned_rate_limits": { 00:07:14.552 "rw_ios_per_sec": 0, 00:07:14.552 "rw_mbytes_per_sec": 0, 00:07:14.552 "r_mbytes_per_sec": 0, 00:07:14.552 "w_mbytes_per_sec": 0 00:07:14.552 }, 00:07:14.552 "claimed": true, 00:07:14.552 "claim_type": "exclusive_write", 00:07:14.552 "zoned": false, 00:07:14.552 "supported_io_types": { 00:07:14.552 "read": true, 00:07:14.552 "write": true, 00:07:14.552 "unmap": true, 00:07:14.552 "flush": true, 00:07:14.552 "reset": true, 00:07:14.552 "nvme_admin": false, 00:07:14.552 "nvme_io": false, 00:07:14.552 "nvme_io_md": 
false, 00:07:14.552 "write_zeroes": true, 00:07:14.552 "zcopy": true, 00:07:14.552 "get_zone_info": false, 00:07:14.552 "zone_management": false, 00:07:14.552 "zone_append": false, 00:07:14.552 "compare": false, 00:07:14.552 "compare_and_write": false, 00:07:14.552 "abort": true, 00:07:14.552 "seek_hole": false, 00:07:14.552 "seek_data": false, 00:07:14.552 "copy": true, 00:07:14.552 "nvme_iov_md": false 00:07:14.552 }, 00:07:14.552 "memory_domains": [ 00:07:14.552 { 00:07:14.552 "dma_device_id": "system", 00:07:14.552 "dma_device_type": 1 00:07:14.552 }, 00:07:14.552 { 00:07:14.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.552 "dma_device_type": 2 00:07:14.552 } 00:07:14.552 ], 00:07:14.552 "driver_specific": {} 00:07:14.552 } 00:07:14.552 ] 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.552 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.822 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.822 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.822 "name": "Existed_Raid", 00:07:14.822 "uuid": "ba56f861-2d7c-4ff3-9fde-5891edb2e96d", 00:07:14.822 "strip_size_kb": 64, 00:07:14.822 "state": "online", 00:07:14.822 "raid_level": "concat", 00:07:14.822 "superblock": false, 00:07:14.822 "num_base_bdevs": 2, 00:07:14.822 "num_base_bdevs_discovered": 2, 00:07:14.822 "num_base_bdevs_operational": 2, 00:07:14.822 "base_bdevs_list": [ 00:07:14.822 { 00:07:14.822 "name": "BaseBdev1", 00:07:14.822 "uuid": "6ed3ee00-0d16-4418-8c86-562aef3cef17", 00:07:14.822 "is_configured": true, 00:07:14.822 "data_offset": 0, 00:07:14.822 "data_size": 65536 00:07:14.822 }, 00:07:14.822 { 00:07:14.822 "name": "BaseBdev2", 00:07:14.822 "uuid": "9c2998df-636f-49c7-b89f-2fffdf913c22", 00:07:14.822 "is_configured": true, 00:07:14.822 "data_offset": 0, 00:07:14.822 "data_size": 65536 00:07:14.822 } 00:07:14.822 ] 00:07:14.822 }' 00:07:14.822 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:14.822 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.099 [2024-11-28 16:20:06.745985] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.099 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.099 "name": "Existed_Raid", 00:07:15.099 "aliases": [ 00:07:15.099 "ba56f861-2d7c-4ff3-9fde-5891edb2e96d" 00:07:15.099 ], 00:07:15.099 "product_name": "Raid Volume", 00:07:15.099 "block_size": 512, 00:07:15.099 "num_blocks": 131072, 00:07:15.099 "uuid": "ba56f861-2d7c-4ff3-9fde-5891edb2e96d", 00:07:15.099 "assigned_rate_limits": { 00:07:15.099 "rw_ios_per_sec": 0, 00:07:15.099 "rw_mbytes_per_sec": 0, 00:07:15.099 "r_mbytes_per_sec": 
0, 00:07:15.099 "w_mbytes_per_sec": 0 00:07:15.099 }, 00:07:15.099 "claimed": false, 00:07:15.099 "zoned": false, 00:07:15.099 "supported_io_types": { 00:07:15.099 "read": true, 00:07:15.099 "write": true, 00:07:15.099 "unmap": true, 00:07:15.099 "flush": true, 00:07:15.099 "reset": true, 00:07:15.099 "nvme_admin": false, 00:07:15.099 "nvme_io": false, 00:07:15.099 "nvme_io_md": false, 00:07:15.099 "write_zeroes": true, 00:07:15.099 "zcopy": false, 00:07:15.099 "get_zone_info": false, 00:07:15.099 "zone_management": false, 00:07:15.099 "zone_append": false, 00:07:15.099 "compare": false, 00:07:15.099 "compare_and_write": false, 00:07:15.099 "abort": false, 00:07:15.099 "seek_hole": false, 00:07:15.099 "seek_data": false, 00:07:15.099 "copy": false, 00:07:15.099 "nvme_iov_md": false 00:07:15.099 }, 00:07:15.099 "memory_domains": [ 00:07:15.099 { 00:07:15.099 "dma_device_id": "system", 00:07:15.099 "dma_device_type": 1 00:07:15.099 }, 00:07:15.099 { 00:07:15.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.099 "dma_device_type": 2 00:07:15.099 }, 00:07:15.099 { 00:07:15.099 "dma_device_id": "system", 00:07:15.099 "dma_device_type": 1 00:07:15.099 }, 00:07:15.099 { 00:07:15.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.099 "dma_device_type": 2 00:07:15.099 } 00:07:15.099 ], 00:07:15.099 "driver_specific": { 00:07:15.099 "raid": { 00:07:15.099 "uuid": "ba56f861-2d7c-4ff3-9fde-5891edb2e96d", 00:07:15.099 "strip_size_kb": 64, 00:07:15.099 "state": "online", 00:07:15.099 "raid_level": "concat", 00:07:15.099 "superblock": false, 00:07:15.099 "num_base_bdevs": 2, 00:07:15.099 "num_base_bdevs_discovered": 2, 00:07:15.099 "num_base_bdevs_operational": 2, 00:07:15.099 "base_bdevs_list": [ 00:07:15.099 { 00:07:15.099 "name": "BaseBdev1", 00:07:15.099 "uuid": "6ed3ee00-0d16-4418-8c86-562aef3cef17", 00:07:15.099 "is_configured": true, 00:07:15.099 "data_offset": 0, 00:07:15.099 "data_size": 65536 00:07:15.099 }, 00:07:15.100 { 00:07:15.100 "name": "BaseBdev2", 
00:07:15.100 "uuid": "9c2998df-636f-49c7-b89f-2fffdf913c22", 00:07:15.100 "is_configured": true, 00:07:15.100 "data_offset": 0, 00:07:15.100 "data_size": 65536 00:07:15.100 } 00:07:15.100 ] 00:07:15.100 } 00:07:15.100 } 00:07:15.100 }' 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:15.100 BaseBdev2' 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.100 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.359 [2024-11-28 16:20:06.957421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.359 [2024-11-28 16:20:06.957492] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.359 [2024-11-28 16:20:06.957566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.359 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.360 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.360 16:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.360 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.360 "name": "Existed_Raid", 00:07:15.360 "uuid": "ba56f861-2d7c-4ff3-9fde-5891edb2e96d", 00:07:15.360 "strip_size_kb": 64, 00:07:15.360 
"state": "offline", 00:07:15.360 "raid_level": "concat", 00:07:15.360 "superblock": false, 00:07:15.360 "num_base_bdevs": 2, 00:07:15.360 "num_base_bdevs_discovered": 1, 00:07:15.360 "num_base_bdevs_operational": 1, 00:07:15.360 "base_bdevs_list": [ 00:07:15.360 { 00:07:15.360 "name": null, 00:07:15.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.360 "is_configured": false, 00:07:15.360 "data_offset": 0, 00:07:15.360 "data_size": 65536 00:07:15.360 }, 00:07:15.360 { 00:07:15.360 "name": "BaseBdev2", 00:07:15.360 "uuid": "9c2998df-636f-49c7-b89f-2fffdf913c22", 00:07:15.360 "is_configured": true, 00:07:15.360 "data_offset": 0, 00:07:15.360 "data_size": 65536 00:07:15.360 } 00:07:15.360 ] 00:07:15.360 }' 00:07:15.360 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.360 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.619 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:15.619 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.619 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:15.619 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.619 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.619 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.879 [2024-11-28 16:20:07.407922] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:15.879 [2024-11-28 16:20:07.408018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73081 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73081 ']' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73081 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73081 00:07:15.879 killing process with pid 73081 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73081' 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73081 00:07:15.879 [2024-11-28 16:20:07.482777] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.879 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73081 00:07:15.879 [2024-11-28 16:20:07.483741] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:16.139 00:07:16.139 real 0m3.760s 00:07:16.139 user 0m5.893s 00:07:16.139 sys 0m0.738s 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.139 ************************************ 00:07:16.139 END TEST raid_state_function_test 00:07:16.139 ************************************ 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.139 16:20:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:16.139 16:20:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:16.139 16:20:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.139 16:20:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.139 ************************************ 00:07:16.139 START TEST raid_state_function_test_sb 00:07:16.139 ************************************ 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73323 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73323' 00:07:16.139 Process raid pid: 73323 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73323 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73323 ']' 00:07:16.139 16:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.140 16:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.140 16:20:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.140 16:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.140 16:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.140 [2024-11-28 16:20:07.882612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:16.140 [2024-11-28 16:20:07.882801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.399 [2024-11-28 16:20:08.041812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.399 [2024-11-28 16:20:08.086197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.399 [2024-11-28 16:20:08.128110] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.399 [2024-11-28 16:20:08.128145] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.968 [2024-11-28 16:20:08.721257] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:16.968 [2024-11-28 16:20:08.721357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.968 [2024-11-28 16:20:08.721400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.968 [2024-11-28 16:20:08.721424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.968 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.228 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.228 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.228 "name": "Existed_Raid", 00:07:17.228 "uuid": "c5392c66-3fcc-477e-b671-8329529af469", 00:07:17.228 "strip_size_kb": 64, 00:07:17.228 "state": "configuring", 00:07:17.228 "raid_level": "concat", 00:07:17.228 "superblock": true, 00:07:17.228 "num_base_bdevs": 2, 00:07:17.228 "num_base_bdevs_discovered": 0, 00:07:17.228 "num_base_bdevs_operational": 2, 00:07:17.228 "base_bdevs_list": [ 00:07:17.228 { 00:07:17.228 "name": "BaseBdev1", 00:07:17.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.228 "is_configured": false, 00:07:17.228 "data_offset": 0, 00:07:17.228 "data_size": 0 00:07:17.228 }, 00:07:17.228 { 00:07:17.228 "name": "BaseBdev2", 00:07:17.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.228 "is_configured": false, 00:07:17.228 "data_offset": 0, 00:07:17.228 "data_size": 0 00:07:17.228 } 00:07:17.228 ] 00:07:17.228 }' 00:07:17.228 16:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.228 16:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 [2024-11-28 16:20:09.120497] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:17.488 [2024-11-28 16:20:09.120585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 [2024-11-28 16:20:09.132498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.488 [2024-11-28 16:20:09.132570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.488 [2024-11-28 16:20:09.132595] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.488 [2024-11-28 16:20:09.132617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 [2024-11-28 16:20:09.153212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.488 BaseBdev1 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 [ 00:07:17.488 { 00:07:17.488 "name": "BaseBdev1", 00:07:17.488 "aliases": [ 00:07:17.488 "5a6a53e5-94b3-42e5-b33a-8e9b34ce3fba" 00:07:17.488 ], 00:07:17.488 "product_name": "Malloc disk", 00:07:17.488 "block_size": 512, 00:07:17.488 "num_blocks": 65536, 00:07:17.488 "uuid": "5a6a53e5-94b3-42e5-b33a-8e9b34ce3fba", 00:07:17.488 "assigned_rate_limits": { 00:07:17.488 "rw_ios_per_sec": 0, 00:07:17.488 "rw_mbytes_per_sec": 0, 00:07:17.488 "r_mbytes_per_sec": 0, 00:07:17.488 "w_mbytes_per_sec": 0 00:07:17.488 }, 00:07:17.488 "claimed": true, 
00:07:17.488 "claim_type": "exclusive_write", 00:07:17.488 "zoned": false, 00:07:17.488 "supported_io_types": { 00:07:17.488 "read": true, 00:07:17.488 "write": true, 00:07:17.488 "unmap": true, 00:07:17.488 "flush": true, 00:07:17.488 "reset": true, 00:07:17.488 "nvme_admin": false, 00:07:17.488 "nvme_io": false, 00:07:17.488 "nvme_io_md": false, 00:07:17.488 "write_zeroes": true, 00:07:17.488 "zcopy": true, 00:07:17.488 "get_zone_info": false, 00:07:17.488 "zone_management": false, 00:07:17.488 "zone_append": false, 00:07:17.488 "compare": false, 00:07:17.488 "compare_and_write": false, 00:07:17.488 "abort": true, 00:07:17.488 "seek_hole": false, 00:07:17.488 "seek_data": false, 00:07:17.488 "copy": true, 00:07:17.488 "nvme_iov_md": false 00:07:17.488 }, 00:07:17.488 "memory_domains": [ 00:07:17.488 { 00:07:17.488 "dma_device_id": "system", 00:07:17.488 "dma_device_type": 1 00:07:17.488 }, 00:07:17.488 { 00:07:17.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.488 "dma_device_type": 2 00:07:17.488 } 00:07:17.488 ], 00:07:17.488 "driver_specific": {} 00:07:17.488 } 00:07:17.488 ] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.488 16:20:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.488 "name": "Existed_Raid", 00:07:17.488 "uuid": "975a4a98-5cc7-45cd-a228-758dcb1d0bbf", 00:07:17.488 "strip_size_kb": 64, 00:07:17.488 "state": "configuring", 00:07:17.488 "raid_level": "concat", 00:07:17.488 "superblock": true, 00:07:17.488 "num_base_bdevs": 2, 00:07:17.488 "num_base_bdevs_discovered": 1, 00:07:17.488 "num_base_bdevs_operational": 2, 00:07:17.488 "base_bdevs_list": [ 00:07:17.488 { 00:07:17.488 "name": "BaseBdev1", 00:07:17.488 "uuid": "5a6a53e5-94b3-42e5-b33a-8e9b34ce3fba", 00:07:17.488 "is_configured": true, 00:07:17.488 "data_offset": 2048, 00:07:17.488 "data_size": 63488 00:07:17.488 }, 00:07:17.488 { 00:07:17.488 "name": "BaseBdev2", 00:07:17.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.488 
"is_configured": false, 00:07:17.488 "data_offset": 0, 00:07:17.488 "data_size": 0 00:07:17.488 } 00:07:17.488 ] 00:07:17.488 }' 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.488 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.057 [2024-11-28 16:20:09.612488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.057 [2024-11-28 16:20:09.612588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.057 [2024-11-28 16:20:09.624517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.057 [2024-11-28 16:20:09.626304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.057 [2024-11-28 16:20:09.626350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.057 16:20:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.057 16:20:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.057 "name": "Existed_Raid", 00:07:18.057 "uuid": "8a4c5294-34bf-4ad6-ac58-bb92c99780fb", 00:07:18.057 "strip_size_kb": 64, 00:07:18.057 "state": "configuring", 00:07:18.057 "raid_level": "concat", 00:07:18.057 "superblock": true, 00:07:18.057 "num_base_bdevs": 2, 00:07:18.057 "num_base_bdevs_discovered": 1, 00:07:18.057 "num_base_bdevs_operational": 2, 00:07:18.057 "base_bdevs_list": [ 00:07:18.057 { 00:07:18.057 "name": "BaseBdev1", 00:07:18.057 "uuid": "5a6a53e5-94b3-42e5-b33a-8e9b34ce3fba", 00:07:18.057 "is_configured": true, 00:07:18.057 "data_offset": 2048, 00:07:18.057 "data_size": 63488 00:07:18.057 }, 00:07:18.057 { 00:07:18.057 "name": "BaseBdev2", 00:07:18.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.057 "is_configured": false, 00:07:18.057 "data_offset": 0, 00:07:18.057 "data_size": 0 00:07:18.057 } 00:07:18.057 ] 00:07:18.057 }' 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.057 16:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.317 [2024-11-28 16:20:10.075421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.317 [2024-11-28 16:20:10.076234] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:18.317 [2024-11-28 16:20:10.076411] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.317 BaseBdev2 00:07:18.317 [2024-11-28 16:20:10.077452] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.317 [2024-11-28 16:20:10.078020] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.317 [2024-11-28 16:20:10.078296] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.317 [2024-11-28 16:20:10.078941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.317 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.577 
16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.577 [ 00:07:18.577 { 00:07:18.577 "name": "BaseBdev2", 00:07:18.577 "aliases": [ 00:07:18.577 "b8a166ad-95fb-4873-b2e8-117452ae0dfa" 00:07:18.577 ], 00:07:18.577 "product_name": "Malloc disk", 00:07:18.577 "block_size": 512, 00:07:18.577 "num_blocks": 65536, 00:07:18.577 "uuid": "b8a166ad-95fb-4873-b2e8-117452ae0dfa", 00:07:18.577 "assigned_rate_limits": { 00:07:18.577 "rw_ios_per_sec": 0, 00:07:18.577 "rw_mbytes_per_sec": 0, 00:07:18.577 "r_mbytes_per_sec": 0, 00:07:18.577 "w_mbytes_per_sec": 0 00:07:18.577 }, 00:07:18.577 "claimed": true, 00:07:18.577 "claim_type": "exclusive_write", 00:07:18.577 "zoned": false, 00:07:18.577 "supported_io_types": { 00:07:18.577 "read": true, 00:07:18.577 "write": true, 00:07:18.577 "unmap": true, 00:07:18.577 "flush": true, 00:07:18.577 "reset": true, 00:07:18.577 "nvme_admin": false, 00:07:18.577 "nvme_io": false, 00:07:18.577 "nvme_io_md": false, 00:07:18.577 "write_zeroes": true, 00:07:18.577 "zcopy": true, 00:07:18.577 "get_zone_info": false, 00:07:18.577 "zone_management": false, 00:07:18.577 "zone_append": false, 00:07:18.577 "compare": false, 00:07:18.577 "compare_and_write": false, 00:07:18.577 "abort": true, 00:07:18.577 "seek_hole": false, 00:07:18.577 "seek_data": false, 00:07:18.577 "copy": true, 00:07:18.577 "nvme_iov_md": false 00:07:18.577 }, 00:07:18.577 "memory_domains": [ 00:07:18.577 { 00:07:18.577 "dma_device_id": "system", 00:07:18.577 "dma_device_type": 1 00:07:18.577 }, 00:07:18.577 { 00:07:18.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.577 "dma_device_type": 2 00:07:18.577 } 00:07:18.577 ], 00:07:18.577 "driver_specific": {} 00:07:18.577 } 00:07:18.577 ] 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:18.577 16:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.577 16:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.577 "name": "Existed_Raid", 00:07:18.577 "uuid": "8a4c5294-34bf-4ad6-ac58-bb92c99780fb", 00:07:18.577 "strip_size_kb": 64, 00:07:18.577 "state": "online", 00:07:18.577 "raid_level": "concat", 00:07:18.577 "superblock": true, 00:07:18.577 "num_base_bdevs": 2, 00:07:18.577 "num_base_bdevs_discovered": 2, 00:07:18.577 "num_base_bdevs_operational": 2, 00:07:18.577 "base_bdevs_list": [ 00:07:18.577 { 00:07:18.577 "name": "BaseBdev1", 00:07:18.577 "uuid": "5a6a53e5-94b3-42e5-b33a-8e9b34ce3fba", 00:07:18.577 "is_configured": true, 00:07:18.577 "data_offset": 2048, 00:07:18.577 "data_size": 63488 00:07:18.577 }, 00:07:18.577 { 00:07:18.577 "name": "BaseBdev2", 00:07:18.577 "uuid": "b8a166ad-95fb-4873-b2e8-117452ae0dfa", 00:07:18.577 "is_configured": true, 00:07:18.577 "data_offset": 2048, 00:07:18.577 "data_size": 63488 00:07:18.577 } 00:07:18.577 ] 00:07:18.577 }' 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.577 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.837 16:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.837 [2024-11-28 16:20:10.550824] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.837 "name": "Existed_Raid", 00:07:18.837 "aliases": [ 00:07:18.837 "8a4c5294-34bf-4ad6-ac58-bb92c99780fb" 00:07:18.837 ], 00:07:18.837 "product_name": "Raid Volume", 00:07:18.837 "block_size": 512, 00:07:18.837 "num_blocks": 126976, 00:07:18.837 "uuid": "8a4c5294-34bf-4ad6-ac58-bb92c99780fb", 00:07:18.837 "assigned_rate_limits": { 00:07:18.837 "rw_ios_per_sec": 0, 00:07:18.837 "rw_mbytes_per_sec": 0, 00:07:18.837 "r_mbytes_per_sec": 0, 00:07:18.837 "w_mbytes_per_sec": 0 00:07:18.837 }, 00:07:18.837 "claimed": false, 00:07:18.837 "zoned": false, 00:07:18.837 "supported_io_types": { 00:07:18.837 "read": true, 00:07:18.837 "write": true, 00:07:18.837 "unmap": true, 00:07:18.837 "flush": true, 00:07:18.837 "reset": true, 00:07:18.837 "nvme_admin": false, 00:07:18.837 "nvme_io": false, 00:07:18.837 "nvme_io_md": false, 00:07:18.837 "write_zeroes": true, 00:07:18.837 "zcopy": false, 00:07:18.837 "get_zone_info": false, 00:07:18.837 "zone_management": false, 00:07:18.837 "zone_append": false, 00:07:18.837 "compare": false, 00:07:18.837 "compare_and_write": false, 00:07:18.837 "abort": false, 00:07:18.837 "seek_hole": false, 00:07:18.837 "seek_data": false, 00:07:18.837 "copy": false, 00:07:18.837 "nvme_iov_md": false 00:07:18.837 }, 00:07:18.837 "memory_domains": [ 00:07:18.837 { 00:07:18.837 "dma_device_id": 
"system", 00:07:18.837 "dma_device_type": 1 00:07:18.837 }, 00:07:18.837 { 00:07:18.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.837 "dma_device_type": 2 00:07:18.837 }, 00:07:18.837 { 00:07:18.837 "dma_device_id": "system", 00:07:18.837 "dma_device_type": 1 00:07:18.837 }, 00:07:18.837 { 00:07:18.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.837 "dma_device_type": 2 00:07:18.837 } 00:07:18.837 ], 00:07:18.837 "driver_specific": { 00:07:18.837 "raid": { 00:07:18.837 "uuid": "8a4c5294-34bf-4ad6-ac58-bb92c99780fb", 00:07:18.837 "strip_size_kb": 64, 00:07:18.837 "state": "online", 00:07:18.837 "raid_level": "concat", 00:07:18.837 "superblock": true, 00:07:18.837 "num_base_bdevs": 2, 00:07:18.837 "num_base_bdevs_discovered": 2, 00:07:18.837 "num_base_bdevs_operational": 2, 00:07:18.837 "base_bdevs_list": [ 00:07:18.837 { 00:07:18.837 "name": "BaseBdev1", 00:07:18.837 "uuid": "5a6a53e5-94b3-42e5-b33a-8e9b34ce3fba", 00:07:18.837 "is_configured": true, 00:07:18.837 "data_offset": 2048, 00:07:18.837 "data_size": 63488 00:07:18.837 }, 00:07:18.837 { 00:07:18.837 "name": "BaseBdev2", 00:07:18.837 "uuid": "b8a166ad-95fb-4873-b2e8-117452ae0dfa", 00:07:18.837 "is_configured": true, 00:07:18.837 "data_offset": 2048, 00:07:18.837 "data_size": 63488 00:07:18.837 } 00:07:18.837 ] 00:07:18.837 } 00:07:18.837 } 00:07:18.837 }' 00:07:18.837 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.097 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:19.097 BaseBdev2' 00:07:19.097 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.097 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.097 16:20:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.098 [2024-11-28 16:20:10.778190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:19.098 [2024-11-28 16:20:10.778256] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.098 [2024-11-28 16:20:10.778329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.098 16:20:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.098 "name": "Existed_Raid", 00:07:19.098 "uuid": "8a4c5294-34bf-4ad6-ac58-bb92c99780fb", 00:07:19.098 "strip_size_kb": 64, 00:07:19.098 "state": "offline", 00:07:19.098 "raid_level": "concat", 00:07:19.098 "superblock": true, 00:07:19.098 "num_base_bdevs": 2, 00:07:19.098 "num_base_bdevs_discovered": 1, 00:07:19.098 "num_base_bdevs_operational": 1, 00:07:19.098 "base_bdevs_list": [ 00:07:19.098 { 00:07:19.098 "name": null, 00:07:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.098 "is_configured": false, 00:07:19.098 "data_offset": 0, 00:07:19.098 "data_size": 63488 00:07:19.098 }, 00:07:19.098 { 00:07:19.098 "name": "BaseBdev2", 00:07:19.098 "uuid": "b8a166ad-95fb-4873-b2e8-117452ae0dfa", 00:07:19.098 "is_configured": true, 00:07:19.098 "data_offset": 2048, 00:07:19.098 "data_size": 63488 00:07:19.098 } 00:07:19.098 ] 00:07:19.098 }' 00:07:19.098 
16:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.098 16:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 [2024-11-28 16:20:11.252742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.668 [2024-11-28 16:20:11.252845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73323 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73323 ']' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73323 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73323 00:07:19.668 killing process with pid 73323 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73323' 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73323 00:07:19.668 [2024-11-28 16:20:11.346414] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.668 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73323 00:07:19.668 [2024-11-28 16:20:11.347375] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.928 16:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.928 00:07:19.928 real 0m3.793s 00:07:19.928 user 0m5.966s 00:07:19.928 sys 0m0.727s 00:07:19.928 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.928 ************************************ 00:07:19.928 END TEST raid_state_function_test_sb 00:07:19.928 ************************************ 00:07:19.928 16:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.928 16:20:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:19.928 16:20:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:19.928 16:20:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.928 16:20:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.928 ************************************ 00:07:19.928 START TEST raid_superblock_test 00:07:19.928 ************************************ 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73564 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73564 00:07:19.928 16:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73564 ']' 00:07:19.929 16:20:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.929 16:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.929 16:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.929 16:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.929 16:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.188 [2024-11-28 16:20:11.746943] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:20.188 [2024-11-28 16:20:11.747164] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73564 ] 00:07:20.188 [2024-11-28 16:20:11.909585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.188 [2024-11-28 16:20:11.954405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.448 [2024-11-28 16:20:11.996407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.448 [2024-11-28 16:20:11.996544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.018 16:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.018 malloc1 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.018 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 [2024-11-28 16:20:12.582545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.019 [2024-11-28 16:20:12.582655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.019 [2024-11-28 16:20:12.582692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:21.019 [2024-11-28 16:20:12.582734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.019 
[2024-11-28 16:20:12.584793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.019 [2024-11-28 16:20:12.584879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.019 pt1 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 malloc2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.019 16:20:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 [2024-11-28 16:20:12.626805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:21.019 [2024-11-28 16:20:12.626944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.019 [2024-11-28 16:20:12.626992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:21.019 [2024-11-28 16:20:12.627042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.019 [2024-11-28 16:20:12.629972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.019 [2024-11-28 16:20:12.630065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:21.019 pt2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 [2024-11-28 16:20:12.638879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.019 [2024-11-28 16:20:12.640735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:21.019 [2024-11-28 16:20:12.640922] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:21.019 [2024-11-28 16:20:12.640971] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.019 
[2024-11-28 16:20:12.641239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:21.019 [2024-11-28 16:20:12.641405] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:21.019 [2024-11-28 16:20:12.641444] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:21.019 [2024-11-28 16:20:12.641613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.019 16:20:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.019 "name": "raid_bdev1", 00:07:21.019 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:21.019 "strip_size_kb": 64, 00:07:21.019 "state": "online", 00:07:21.019 "raid_level": "concat", 00:07:21.019 "superblock": true, 00:07:21.019 "num_base_bdevs": 2, 00:07:21.019 "num_base_bdevs_discovered": 2, 00:07:21.019 "num_base_bdevs_operational": 2, 00:07:21.019 "base_bdevs_list": [ 00:07:21.019 { 00:07:21.019 "name": "pt1", 00:07:21.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.019 "is_configured": true, 00:07:21.019 "data_offset": 2048, 00:07:21.019 "data_size": 63488 00:07:21.019 }, 00:07:21.019 { 00:07:21.019 "name": "pt2", 00:07:21.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.019 "is_configured": true, 00:07:21.019 "data_offset": 2048, 00:07:21.019 "data_size": 63488 00:07:21.019 } 00:07:21.019 ] 00:07:21.019 }' 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.019 16:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.280 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.280 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:21.280 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.280 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.280 16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.280 
16:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.280 [2024-11-28 16:20:13.010469] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.280 "name": "raid_bdev1", 00:07:21.280 "aliases": [ 00:07:21.280 "7ff19fce-8e94-4c7a-8e97-18cc1b3562df" 00:07:21.280 ], 00:07:21.280 "product_name": "Raid Volume", 00:07:21.280 "block_size": 512, 00:07:21.280 "num_blocks": 126976, 00:07:21.280 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:21.280 "assigned_rate_limits": { 00:07:21.280 "rw_ios_per_sec": 0, 00:07:21.280 "rw_mbytes_per_sec": 0, 00:07:21.280 "r_mbytes_per_sec": 0, 00:07:21.280 "w_mbytes_per_sec": 0 00:07:21.280 }, 00:07:21.280 "claimed": false, 00:07:21.280 "zoned": false, 00:07:21.280 "supported_io_types": { 00:07:21.280 "read": true, 00:07:21.280 "write": true, 00:07:21.280 "unmap": true, 00:07:21.280 "flush": true, 00:07:21.280 "reset": true, 00:07:21.280 "nvme_admin": false, 00:07:21.280 "nvme_io": false, 00:07:21.280 "nvme_io_md": false, 00:07:21.280 "write_zeroes": true, 00:07:21.280 "zcopy": false, 00:07:21.280 "get_zone_info": false, 00:07:21.280 "zone_management": false, 00:07:21.280 "zone_append": false, 00:07:21.280 "compare": false, 00:07:21.280 "compare_and_write": false, 00:07:21.280 "abort": false, 00:07:21.280 "seek_hole": false, 00:07:21.280 
"seek_data": false, 00:07:21.280 "copy": false, 00:07:21.280 "nvme_iov_md": false 00:07:21.280 }, 00:07:21.280 "memory_domains": [ 00:07:21.280 { 00:07:21.280 "dma_device_id": "system", 00:07:21.280 "dma_device_type": 1 00:07:21.280 }, 00:07:21.280 { 00:07:21.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.280 "dma_device_type": 2 00:07:21.280 }, 00:07:21.280 { 00:07:21.280 "dma_device_id": "system", 00:07:21.280 "dma_device_type": 1 00:07:21.280 }, 00:07:21.280 { 00:07:21.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.280 "dma_device_type": 2 00:07:21.280 } 00:07:21.280 ], 00:07:21.280 "driver_specific": { 00:07:21.280 "raid": { 00:07:21.280 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:21.280 "strip_size_kb": 64, 00:07:21.280 "state": "online", 00:07:21.280 "raid_level": "concat", 00:07:21.280 "superblock": true, 00:07:21.280 "num_base_bdevs": 2, 00:07:21.280 "num_base_bdevs_discovered": 2, 00:07:21.280 "num_base_bdevs_operational": 2, 00:07:21.280 "base_bdevs_list": [ 00:07:21.280 { 00:07:21.280 "name": "pt1", 00:07:21.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.280 "is_configured": true, 00:07:21.280 "data_offset": 2048, 00:07:21.280 "data_size": 63488 00:07:21.280 }, 00:07:21.280 { 00:07:21.280 "name": "pt2", 00:07:21.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.280 "is_configured": true, 00:07:21.280 "data_offset": 2048, 00:07:21.280 "data_size": 63488 00:07:21.280 } 00:07:21.280 ] 00:07:21.280 } 00:07:21.280 } 00:07:21.280 }' 00:07:21.280 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:21.541 pt2' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.541 16:20:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 [2024-11-28 16:20:13.190144] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7ff19fce-8e94-4c7a-8e97-18cc1b3562df 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7ff19fce-8e94-4c7a-8e97-18cc1b3562df ']' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 [2024-11-28 16:20:13.233803] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.541 [2024-11-28 16:20:13.233886] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.541 [2024-11-28 16:20:13.233998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.541 [2024-11-28 16:20:13.234081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.541 [2024-11-28 16:20:13.234145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 [2024-11-28 16:20:13.365642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:21.802 [2024-11-28 16:20:13.367532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:21.802 [2024-11-28 16:20:13.367651] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:21.802 [2024-11-28 16:20:13.367744] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:21.802 [2024-11-28 16:20:13.367784] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.802 [2024-11-28 16:20:13.367813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:21.802 request: 00:07:21.802 { 00:07:21.802 "name": "raid_bdev1", 00:07:21.802 "raid_level": "concat", 00:07:21.802 "base_bdevs": [ 00:07:21.802 "malloc1", 00:07:21.802 "malloc2" 00:07:21.802 ], 00:07:21.802 "strip_size_kb": 64, 00:07:21.802 "superblock": false, 00:07:21.802 "method": "bdev_raid_create", 00:07:21.802 "req_id": 1 00:07:21.802 } 00:07:21.802 Got JSON-RPC error response 00:07:21.802 response: 00:07:21.802 { 00:07:21.802 "code": -17, 00:07:21.802 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:21.802 } 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 
16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.802 [2024-11-28 16:20:13.417478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.802 [2024-11-28 16:20:13.417564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.802 [2024-11-28 16:20:13.417597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:21.802 [2024-11-28 16:20:13.417624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.802 [2024-11-28 16:20:13.419713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.802 [2024-11-28 16:20:13.419782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.802 [2024-11-28 16:20:13.419895] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:21.802 [2024-11-28 16:20:13.419971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.802 pt1 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.802 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.803 "name": "raid_bdev1", 00:07:21.803 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:21.803 "strip_size_kb": 64, 00:07:21.803 "state": "configuring", 00:07:21.803 "raid_level": "concat", 00:07:21.803 "superblock": true, 00:07:21.803 "num_base_bdevs": 2, 00:07:21.803 "num_base_bdevs_discovered": 1, 00:07:21.803 "num_base_bdevs_operational": 2, 00:07:21.803 "base_bdevs_list": [ 00:07:21.803 { 00:07:21.803 "name": "pt1", 00:07:21.803 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:21.803 "is_configured": true, 00:07:21.803 "data_offset": 2048, 00:07:21.803 "data_size": 63488 00:07:21.803 }, 00:07:21.803 { 00:07:21.803 "name": null, 00:07:21.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.803 "is_configured": false, 00:07:21.803 "data_offset": 2048, 00:07:21.803 "data_size": 63488 00:07:21.803 } 00:07:21.803 ] 00:07:21.803 }' 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.803 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.063 [2024-11-28 16:20:13.816804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:22.063 [2024-11-28 16:20:13.816919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.063 [2024-11-28 16:20:13.816960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:22.063 [2024-11-28 16:20:13.816988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.063 [2024-11-28 16:20:13.817424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.063 [2024-11-28 16:20:13.817476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:22.063 [2024-11-28 16:20:13.817578] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:22.063 [2024-11-28 16:20:13.817627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:22.063 [2024-11-28 16:20:13.817743] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:22.063 [2024-11-28 16:20:13.817776] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.063 [2024-11-28 16:20:13.818022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:22.063 [2024-11-28 16:20:13.818162] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:22.063 [2024-11-28 16:20:13.818206] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:22.063 [2024-11-28 16:20:13.818338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.063 pt2 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.063 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.324 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.324 16:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.324 "name": "raid_bdev1", 00:07:22.324 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:22.324 "strip_size_kb": 64, 00:07:22.324 "state": "online", 00:07:22.324 "raid_level": "concat", 00:07:22.324 "superblock": true, 00:07:22.324 "num_base_bdevs": 2, 00:07:22.324 "num_base_bdevs_discovered": 2, 00:07:22.324 "num_base_bdevs_operational": 2, 00:07:22.324 "base_bdevs_list": [ 00:07:22.324 { 00:07:22.324 "name": "pt1", 00:07:22.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.324 "is_configured": true, 00:07:22.324 "data_offset": 2048, 00:07:22.324 "data_size": 63488 00:07:22.324 }, 00:07:22.324 { 00:07:22.324 "name": "pt2", 00:07:22.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.324 "is_configured": true, 00:07:22.324 "data_offset": 2048, 00:07:22.324 "data_size": 63488 00:07:22.324 } 00:07:22.324 ] 00:07:22.324 }' 00:07:22.324 16:20:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.324 16:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.584 [2024-11-28 16:20:14.224365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.584 "name": "raid_bdev1", 00:07:22.584 "aliases": [ 00:07:22.584 "7ff19fce-8e94-4c7a-8e97-18cc1b3562df" 00:07:22.584 ], 00:07:22.584 "product_name": "Raid Volume", 00:07:22.584 "block_size": 512, 00:07:22.584 "num_blocks": 126976, 00:07:22.584 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:22.584 "assigned_rate_limits": { 00:07:22.584 "rw_ios_per_sec": 0, 00:07:22.584 "rw_mbytes_per_sec": 0, 00:07:22.584 
"r_mbytes_per_sec": 0, 00:07:22.584 "w_mbytes_per_sec": 0 00:07:22.584 }, 00:07:22.584 "claimed": false, 00:07:22.584 "zoned": false, 00:07:22.584 "supported_io_types": { 00:07:22.584 "read": true, 00:07:22.584 "write": true, 00:07:22.584 "unmap": true, 00:07:22.584 "flush": true, 00:07:22.584 "reset": true, 00:07:22.584 "nvme_admin": false, 00:07:22.584 "nvme_io": false, 00:07:22.584 "nvme_io_md": false, 00:07:22.584 "write_zeroes": true, 00:07:22.584 "zcopy": false, 00:07:22.584 "get_zone_info": false, 00:07:22.584 "zone_management": false, 00:07:22.584 "zone_append": false, 00:07:22.584 "compare": false, 00:07:22.584 "compare_and_write": false, 00:07:22.584 "abort": false, 00:07:22.584 "seek_hole": false, 00:07:22.584 "seek_data": false, 00:07:22.584 "copy": false, 00:07:22.584 "nvme_iov_md": false 00:07:22.584 }, 00:07:22.584 "memory_domains": [ 00:07:22.584 { 00:07:22.584 "dma_device_id": "system", 00:07:22.584 "dma_device_type": 1 00:07:22.584 }, 00:07:22.584 { 00:07:22.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.584 "dma_device_type": 2 00:07:22.584 }, 00:07:22.584 { 00:07:22.584 "dma_device_id": "system", 00:07:22.584 "dma_device_type": 1 00:07:22.584 }, 00:07:22.584 { 00:07:22.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.584 "dma_device_type": 2 00:07:22.584 } 00:07:22.584 ], 00:07:22.584 "driver_specific": { 00:07:22.584 "raid": { 00:07:22.584 "uuid": "7ff19fce-8e94-4c7a-8e97-18cc1b3562df", 00:07:22.584 "strip_size_kb": 64, 00:07:22.584 "state": "online", 00:07:22.584 "raid_level": "concat", 00:07:22.584 "superblock": true, 00:07:22.584 "num_base_bdevs": 2, 00:07:22.584 "num_base_bdevs_discovered": 2, 00:07:22.584 "num_base_bdevs_operational": 2, 00:07:22.584 "base_bdevs_list": [ 00:07:22.584 { 00:07:22.584 "name": "pt1", 00:07:22.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.584 "is_configured": true, 00:07:22.584 "data_offset": 2048, 00:07:22.584 "data_size": 63488 00:07:22.584 }, 00:07:22.584 { 00:07:22.584 "name": 
"pt2", 00:07:22.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.584 "is_configured": true, 00:07:22.584 "data_offset": 2048, 00:07:22.584 "data_size": 63488 00:07:22.584 } 00:07:22.584 ] 00:07:22.584 } 00:07:22.584 } 00:07:22.584 }' 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:22.584 pt2' 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.584 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.844 16:20:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.844 [2024-11-28 16:20:14.459961] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7ff19fce-8e94-4c7a-8e97-18cc1b3562df '!=' 7ff19fce-8e94-4c7a-8e97-18cc1b3562df ']' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73564 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73564 ']' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 73564 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73564 00:07:22.844 killing process with pid 73564 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73564' 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73564 00:07:22.844 [2024-11-28 16:20:14.529254] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.844 [2024-11-28 16:20:14.529331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.844 [2024-11-28 16:20:14.529379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.844 [2024-11-28 16:20:14.529388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:22.844 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73564 00:07:22.844 [2024-11-28 16:20:14.552219] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.105 16:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:23.105 00:07:23.105 real 0m3.135s 00:07:23.105 user 0m4.769s 00:07:23.105 sys 0m0.653s 00:07:23.105 ************************************ 00:07:23.105 END TEST raid_superblock_test 00:07:23.105 ************************************ 00:07:23.106 16:20:14 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.106 16:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 16:20:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:23.106 16:20:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:23.106 16:20:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.106 16:20:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 ************************************ 00:07:23.106 START TEST raid_read_error_test 00:07:23.106 ************************************ 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:23.106 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PfBtJ2YmG6 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73759 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73759 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73759 ']' 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.366 16:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.366 [2024-11-28 16:20:14.960326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:23.366 [2024-11-28 16:20:14.960522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73759 ] 00:07:23.366 [2024-11-28 16:20:15.109970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.625 [2024-11-28 16:20:15.155461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.625 [2024-11-28 16:20:15.198515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.625 [2024-11-28 16:20:15.198551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 BaseBdev1_malloc 
00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 true 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 [2024-11-28 16:20:15.813100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:24.195 [2024-11-28 16:20:15.813188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.195 [2024-11-28 16:20:15.813213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:24.195 [2024-11-28 16:20:15.813230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.195 [2024-11-28 16:20:15.815279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.195 [2024-11-28 16:20:15.815315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:24.195 BaseBdev1 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 BaseBdev2_malloc 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 true 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 [2024-11-28 16:20:15.863291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:24.195 [2024-11-28 16:20:15.863371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.195 [2024-11-28 16:20:15.863402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:24.195 [2024-11-28 16:20:15.863412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.195 [2024-11-28 16:20:15.865384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.195 [2024-11-28 16:20:15.865423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:24.195 BaseBdev2 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.195 [2024-11-28 16:20:15.875295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.195 [2024-11-28 16:20:15.877122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.195 [2024-11-28 16:20:15.877313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:24.195 [2024-11-28 16:20:15.877357] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.195 [2024-11-28 16:20:15.877613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:24.195 [2024-11-28 16:20:15.877780] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:24.195 [2024-11-28 16:20:15.877821] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:24.195 [2024-11-28 16:20:15.877997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.195 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.196 "name": "raid_bdev1", 00:07:24.196 "uuid": "d2ff8833-c195-4159-82d8-27c527527ae9", 00:07:24.196 "strip_size_kb": 64, 00:07:24.196 "state": "online", 00:07:24.196 "raid_level": "concat", 00:07:24.196 "superblock": true, 00:07:24.196 "num_base_bdevs": 2, 00:07:24.196 "num_base_bdevs_discovered": 2, 00:07:24.196 "num_base_bdevs_operational": 2, 00:07:24.196 "base_bdevs_list": [ 00:07:24.196 { 00:07:24.196 "name": "BaseBdev1", 00:07:24.196 "uuid": "ca8ceebb-ff92-5cb0-be99-8850d96cb1d1", 00:07:24.196 "is_configured": true, 00:07:24.196 "data_offset": 2048, 00:07:24.196 "data_size": 63488 00:07:24.196 }, 00:07:24.196 { 00:07:24.196 "name": "BaseBdev2", 00:07:24.196 
"uuid": "3eb6070e-e78e-5fb0-8751-f7495e65d8db", 00:07:24.196 "is_configured": true, 00:07:24.196 "data_offset": 2048, 00:07:24.196 "data_size": 63488 00:07:24.196 } 00:07:24.196 ] 00:07:24.196 }' 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.196 16:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.765 16:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.765 16:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.765 [2024-11-28 16:20:16.394754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.706 "name": "raid_bdev1", 00:07:25.706 "uuid": "d2ff8833-c195-4159-82d8-27c527527ae9", 00:07:25.706 "strip_size_kb": 64, 00:07:25.706 "state": "online", 00:07:25.706 "raid_level": "concat", 00:07:25.706 "superblock": true, 00:07:25.706 "num_base_bdevs": 2, 00:07:25.706 "num_base_bdevs_discovered": 2, 00:07:25.706 "num_base_bdevs_operational": 2, 00:07:25.706 "base_bdevs_list": [ 00:07:25.706 { 00:07:25.706 "name": "BaseBdev1", 00:07:25.706 "uuid": "ca8ceebb-ff92-5cb0-be99-8850d96cb1d1", 00:07:25.706 "is_configured": true, 00:07:25.706 "data_offset": 2048, 00:07:25.706 "data_size": 63488 00:07:25.706 }, 00:07:25.706 { 00:07:25.706 "name": "BaseBdev2", 00:07:25.706 "uuid": 
"3eb6070e-e78e-5fb0-8751-f7495e65d8db", 00:07:25.706 "is_configured": true, 00:07:25.706 "data_offset": 2048, 00:07:25.706 "data_size": 63488 00:07:25.706 } 00:07:25.706 ] 00:07:25.706 }' 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.706 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 [2024-11-28 16:20:17.742049] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.275 [2024-11-28 16:20:17.742121] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.275 [2024-11-28 16:20:17.744613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.275 [2024-11-28 16:20:17.744688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.275 [2024-11-28 16:20:17.744739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.275 [2024-11-28 16:20:17.744781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:26.275 { 00:07:26.275 "results": [ 00:07:26.275 { 00:07:26.275 "job": "raid_bdev1", 00:07:26.275 "core_mask": "0x1", 00:07:26.275 "workload": "randrw", 00:07:26.275 "percentage": 50, 00:07:26.275 "status": "finished", 00:07:26.275 "queue_depth": 1, 00:07:26.275 "io_size": 131072, 00:07:26.275 "runtime": 1.348211, 00:07:26.275 "iops": 18280.52137239646, 00:07:26.275 "mibps": 2285.0651715495574, 00:07:26.275 "io_failed": 1, 00:07:26.275 "io_timeout": 0, 00:07:26.275 "avg_latency_us": 
75.72879068162985, 00:07:26.275 "min_latency_us": 24.370305676855896, 00:07:26.275 "max_latency_us": 1366.5257641921398 00:07:26.275 } 00:07:26.275 ], 00:07:26.275 "core_count": 1 00:07:26.275 } 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73759 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73759 ']' 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73759 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73759 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73759' 00:07:26.275 killing process with pid 73759 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73759 00:07:26.275 [2024-11-28 16:20:17.777005] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.275 16:20:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73759 00:07:26.275 [2024-11-28 16:20:17.792208] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PfBtJ2YmG6 00:07:26.275 
16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:26.275 00:07:26.275 real 0m3.169s 00:07:26.275 user 0m4.002s 00:07:26.275 sys 0m0.494s 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.275 16:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 ************************************ 00:07:26.275 END TEST raid_read_error_test 00:07:26.275 ************************************ 00:07:26.535 16:20:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:26.535 16:20:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:26.535 16:20:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.535 16:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.535 ************************************ 00:07:26.535 START TEST raid_write_error_test 00:07:26.535 ************************************ 00:07:26.535 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:26.536 16:20:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HCUv8DJIKl 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73888 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73888 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73888 ']' 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.536 16:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.536 [2024-11-28 16:20:18.197149] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:26.536 [2024-11-28 16:20:18.197353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73888 ]
00:07:26.798 [2024-11-28 16:20:18.357188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.798 [2024-11-28 16:20:18.403530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.798 [2024-11-28 16:20:18.446408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:26.798 [2024-11-28 16:20:18.446447] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.376 BaseBdev1_malloc
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.376 true
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.376 [2024-11-28 16:20:19.049022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:27.376 [2024-11-28 16:20:19.049110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:27.376 [2024-11-28 16:20:19.049155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:27.376 [2024-11-28 16:20:19.049190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:27.376 [2024-11-28 16:20:19.051239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:27.376 [2024-11-28 16:20:19.051305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:27.376 BaseBdev1
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.376 BaseBdev2_malloc
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.376 true
00:07:27.376 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.377 [2024-11-28 16:20:19.106041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:27.377 [2024-11-28 16:20:19.106140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:27.377 [2024-11-28 16:20:19.106182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:27.377 [2024-11-28 16:20:19.106218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:27.377 [2024-11-28 16:20:19.108742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:27.377 [2024-11-28 16:20:19.108823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:27.377 BaseBdev2
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.377 [2024-11-28 16:20:19.117974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:27.377 [2024-11-28 16:20:19.119816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:27.377 [2024-11-28 16:20:19.120029] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:07:27.377 [2024-11-28 16:20:19.120074] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:27.377 [2024-11-28 16:20:19.120316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:27.377 [2024-11-28 16:20:19.120449] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:07:27.377 [2024-11-28 16:20:19.120461] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:07:27.377 [2024-11-28 16:20:19.120576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:27.377 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.637 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:27.637 "name": "raid_bdev1",
00:07:27.637 "uuid": "44161d3a-45c3-44c8-bf1b-b6d9bf866bd1",
00:07:27.637 "strip_size_kb": 64,
00:07:27.637 "state": "online",
00:07:27.637 "raid_level": "concat",
00:07:27.637 "superblock": true,
00:07:27.637 "num_base_bdevs": 2,
00:07:27.637 "num_base_bdevs_discovered": 2,
00:07:27.637 "num_base_bdevs_operational": 2,
00:07:27.637 "base_bdevs_list": [
00:07:27.637 {
00:07:27.637 "name": "BaseBdev1",
00:07:27.637 "uuid": "514e9cf4-4d4d-5a24-9917-046933179255",
00:07:27.637 "is_configured": true,
00:07:27.637 "data_offset": 2048,
00:07:27.637 "data_size": 63488
00:07:27.637 },
00:07:27.637 {
00:07:27.637 "name": "BaseBdev2",
00:07:27.637 "uuid": "e8c8988d-916c-53bc-b6f6-5c12797aabcd",
00:07:27.637 "is_configured": true,
00:07:27.637 "data_offset": 2048,
00:07:27.637 "data_size": 63488
00:07:27.637 }
00:07:27.637 ]
00:07:27.637 }'
00:07:27.637 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:27.637 16:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.897 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:27.897 16:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:28.157 [2024-11-28 16:20:19.697346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:29.098 "name": "raid_bdev1",
00:07:29.098 "uuid": "44161d3a-45c3-44c8-bf1b-b6d9bf866bd1",
00:07:29.098 "strip_size_kb": 64,
00:07:29.098 "state": "online",
00:07:29.098 "raid_level": "concat",
00:07:29.098 "superblock": true,
00:07:29.098 "num_base_bdevs": 2,
00:07:29.098 "num_base_bdevs_discovered": 2,
00:07:29.098 "num_base_bdevs_operational": 2,
00:07:29.098 "base_bdevs_list": [
00:07:29.098 {
00:07:29.098 "name": "BaseBdev1",
00:07:29.098 "uuid": "514e9cf4-4d4d-5a24-9917-046933179255",
00:07:29.098 "is_configured": true,
00:07:29.098 "data_offset": 2048,
00:07:29.098 "data_size": 63488
00:07:29.098 },
00:07:29.098 {
00:07:29.098 "name": "BaseBdev2",
00:07:29.098 "uuid": "e8c8988d-916c-53bc-b6f6-5c12797aabcd",
00:07:29.098 "is_configured": true,
00:07:29.098 "data_offset": 2048,
00:07:29.098 "data_size": 63488
00:07:29.098 }
00:07:29.098 ]
00:07:29.098 }'
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:29.098 16:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.358 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:29.358 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.358 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.358 [2024-11-28 16:20:21.064843] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:29.358 [2024-11-28 16:20:21.064930] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:29.358 [2024-11-28 16:20:21.067369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:29.358 [2024-11-28 16:20:21.067447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:29.358 [2024-11-28 16:20:21.067497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:29.358 [2024-11-28 16:20:21.067547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:07:29.358 {
00:07:29.358 "results": [
00:07:29.359 {
00:07:29.359 "job": "raid_bdev1",
00:07:29.359 "core_mask": "0x1",
00:07:29.359 "workload": "randrw",
00:07:29.359 "percentage": 50,
00:07:29.359 "status": "finished",
00:07:29.359 "queue_depth": 1,
00:07:29.359 "io_size": 131072,
00:07:29.359 "runtime": 1.368469,
00:07:29.359 "iops": 18225.47679194779,
00:07:29.359 "mibps": 2278.1845989934736,
00:07:29.359 "io_failed": 1,
00:07:29.359 "io_timeout": 0,
00:07:29.359 "avg_latency_us": 75.97619686406087,
00:07:29.359 "min_latency_us": 24.034934497816593,
00:07:29.359 "max_latency_us": 1359.3711790393013
00:07:29.359 }
00:07:29.359 ],
00:07:29.359 "core_count": 1
00:07:29.359 }
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73888
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73888 ']'
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73888
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73888
killing process with pid 73888
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73888'
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73888
00:07:29.359 [2024-11-28 16:20:21.103841] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:29.359 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73888
00:07:29.359 [2024-11-28 16:20:21.119202] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HCUv8DJIKl
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:07:29.619
00:07:29.619 real 0m3.266s
00:07:29.619 user 0m4.182s
00:07:29.619 sys 0m0.487s
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:29.619 16:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.619 ************************************
00:07:29.619 END TEST raid_write_error_test
00:07:29.619 ************************************
00:07:29.879 16:20:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:29.879 16:20:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:07:29.879 16:20:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:29.879 16:20:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:29.879 16:20:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:29.879 ************************************
00:07:29.879 START TEST raid_state_function_test
00:07:29.879 ************************************
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74015
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74015'
Process raid pid: 74015
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74015
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74015 ']'
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:29.879 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:29.880 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:29.880 16:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.880 [2024-11-28 16:20:21.525451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:29.880 [2024-11-28 16:20:21.525650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:30.140 [2024-11-28 16:20:21.667818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.140 [2024-11-28 16:20:21.712992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.140 [2024-11-28 16:20:21.755932] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:30.140 [2024-11-28 16:20:21.756046] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.710 [2024-11-28 16:20:22.345668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:30.710 [2024-11-28 16:20:22.345766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:30.710 [2024-11-28 16:20:22.345797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:30.710 [2024-11-28 16:20:22.345820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:30.710 "name": "Existed_Raid",
00:07:30.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.710 "strip_size_kb": 0,
00:07:30.710 "state": "configuring",
00:07:30.710 "raid_level": "raid1",
00:07:30.710 "superblock": false,
00:07:30.710 "num_base_bdevs": 2,
00:07:30.710 "num_base_bdevs_discovered": 0,
00:07:30.710 "num_base_bdevs_operational": 2,
00:07:30.710 "base_bdevs_list": [
00:07:30.710 {
00:07:30.710 "name": "BaseBdev1",
00:07:30.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.710 "is_configured": false,
00:07:30.710 "data_offset": 0,
00:07:30.710 "data_size": 0
00:07:30.710 },
00:07:30.710 {
00:07:30.710 "name": "BaseBdev2",
00:07:30.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.710 "is_configured": false,
00:07:30.710 "data_offset": 0,
00:07:30.710 "data_size": 0
00:07:30.710 }
00:07:30.710 ]
00:07:30.710 }'
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:30.710 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.280 [2024-11-28 16:20:22.812762] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:31.280 [2024-11-28 16:20:22.812853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.280 [2024-11-28 16:20:22.824774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:31.280 [2024-11-28 16:20:22.824855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:31.280 [2024-11-28 16:20:22.824882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:31.280 [2024-11-28 16:20:22.824905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.280 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.281 [2024-11-28 16:20:22.845470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:31.281 BaseBdev1
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.281 [
00:07:31.281 {
00:07:31.281 "name": "BaseBdev1",
00:07:31.281 "aliases": [
00:07:31.281 "0be0c878-706b-4eea-a8bc-6ba471c3235b"
00:07:31.281 ],
00:07:31.281 "product_name": "Malloc disk",
00:07:31.281 "block_size": 512,
00:07:31.281 "num_blocks": 65536,
00:07:31.281 "uuid": "0be0c878-706b-4eea-a8bc-6ba471c3235b",
00:07:31.281 "assigned_rate_limits": {
00:07:31.281 "rw_ios_per_sec": 0,
00:07:31.281 "rw_mbytes_per_sec": 0,
00:07:31.281 "r_mbytes_per_sec": 0,
00:07:31.281 "w_mbytes_per_sec": 0
00:07:31.281 },
00:07:31.281 "claimed": true,
00:07:31.281 "claim_type": "exclusive_write",
00:07:31.281 "zoned": false,
00:07:31.281 "supported_io_types": {
00:07:31.281 "read": true,
00:07:31.281 "write": true,
00:07:31.281 "unmap": true,
00:07:31.281 "flush": true,
00:07:31.281 "reset": true,
00:07:31.281 "nvme_admin": false,
00:07:31.281 "nvme_io": false,
00:07:31.281 "nvme_io_md": false,
00:07:31.281 "write_zeroes": true,
00:07:31.281 "zcopy": true,
00:07:31.281 "get_zone_info": false,
00:07:31.281 "zone_management": false,
00:07:31.281 "zone_append": false,
00:07:31.281 "compare": false,
00:07:31.281 "compare_and_write": false,
00:07:31.281 "abort": true,
00:07:31.281 "seek_hole": false,
00:07:31.281 "seek_data": false,
00:07:31.281 "copy": true,
00:07:31.281 "nvme_iov_md": false
00:07:31.281 },
00:07:31.281 "memory_domains": [
00:07:31.281 {
00:07:31.281 "dma_device_id": "system",
00:07:31.281 "dma_device_type": 1
00:07:31.281 },
00:07:31.281 {
00:07:31.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:31.281 "dma_device_type": 2
00:07:31.281 }
00:07:31.281 ],
00:07:31.281 "driver_specific": {}
00:07:31.281 }
00:07:31.281 ]
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:31.281 "name": "Existed_Raid",
00:07:31.281 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:31.281 "strip_size_kb": 0,
00:07:31.281 "state": "configuring",
00:07:31.281 "raid_level": "raid1",
00:07:31.281 "superblock": false,
00:07:31.281 "num_base_bdevs": 2,
00:07:31.281 "num_base_bdevs_discovered": 1,
00:07:31.281 "num_base_bdevs_operational": 2,
00:07:31.281 "base_bdevs_list": [
00:07:31.281 {
00:07:31.281 "name": "BaseBdev1",
00:07:31.281 "uuid": "0be0c878-706b-4eea-a8bc-6ba471c3235b",
00:07:31.281 "is_configured": true,
00:07:31.281 "data_offset": 0,
00:07:31.281 "data_size": 65536
00:07:31.281 },
00:07:31.281 {
00:07:31.281 "name": "BaseBdev2",
00:07:31.281 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:31.281 "is_configured": false,
00:07:31.281 "data_offset": 0,
00:07:31.281 "data_size": 0
00:07:31.281 }
00:07:31.281 ]
00:07:31.281 }'
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:31.281 16:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.851 [2024-11-28 16:20:23.324665] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:31.851 [2024-11-28 16:20:23.324747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.851 [2024-11-28 16:20:23.336674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:31.851 [2024-11-28 16:20:23.338421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:31.851 [2024-11-28 16:20:23.338495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- #
local num_base_bdevs_operational=2 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.851 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.851 "name": "Existed_Raid", 00:07:31.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.851 "strip_size_kb": 0, 00:07:31.851 "state": "configuring", 00:07:31.851 "raid_level": "raid1", 00:07:31.851 "superblock": false, 00:07:31.851 "num_base_bdevs": 2, 00:07:31.851 "num_base_bdevs_discovered": 1, 00:07:31.851 "num_base_bdevs_operational": 2, 00:07:31.851 "base_bdevs_list": [ 00:07:31.851 { 00:07:31.851 "name": "BaseBdev1", 00:07:31.851 "uuid": "0be0c878-706b-4eea-a8bc-6ba471c3235b", 00:07:31.852 "is_configured": true, 00:07:31.852 "data_offset": 0, 00:07:31.852 "data_size": 65536 00:07:31.852 }, 00:07:31.852 { 00:07:31.852 "name": "BaseBdev2", 00:07:31.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.852 "is_configured": false, 00:07:31.852 "data_offset": 0, 00:07:31.852 "data_size": 0 00:07:31.852 } 00:07:31.852 
] 00:07:31.852 }' 00:07:31.852 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.852 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.112 [2024-11-28 16:20:23.743155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.112 [2024-11-28 16:20:23.743438] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:32.112 [2024-11-28 16:20:23.743537] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:32.112 [2024-11-28 16:20:23.744622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:32.112 [2024-11-28 16:20:23.745253] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:32.112 [2024-11-28 16:20:23.745346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:32.112 [2024-11-28 16:20:23.745988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.112 BaseBdev2 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.112 16:20:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.112 [ 00:07:32.112 { 00:07:32.112 "name": "BaseBdev2", 00:07:32.112 "aliases": [ 00:07:32.112 "41eb8a20-ae73-475d-b30b-4dad3834ed70" 00:07:32.112 ], 00:07:32.112 "product_name": "Malloc disk", 00:07:32.112 "block_size": 512, 00:07:32.112 "num_blocks": 65536, 00:07:32.112 "uuid": "41eb8a20-ae73-475d-b30b-4dad3834ed70", 00:07:32.112 "assigned_rate_limits": { 00:07:32.112 "rw_ios_per_sec": 0, 00:07:32.112 "rw_mbytes_per_sec": 0, 00:07:32.112 "r_mbytes_per_sec": 0, 00:07:32.112 "w_mbytes_per_sec": 0 00:07:32.112 }, 00:07:32.112 "claimed": true, 00:07:32.112 "claim_type": "exclusive_write", 00:07:32.112 "zoned": false, 00:07:32.112 "supported_io_types": { 00:07:32.112 "read": true, 00:07:32.112 "write": true, 00:07:32.112 "unmap": true, 00:07:32.112 "flush": true, 00:07:32.112 "reset": true, 00:07:32.112 "nvme_admin": false, 00:07:32.112 "nvme_io": false, 00:07:32.112 "nvme_io_md": 
false, 00:07:32.112 "write_zeroes": true, 00:07:32.112 "zcopy": true, 00:07:32.112 "get_zone_info": false, 00:07:32.112 "zone_management": false, 00:07:32.112 "zone_append": false, 00:07:32.112 "compare": false, 00:07:32.112 "compare_and_write": false, 00:07:32.112 "abort": true, 00:07:32.112 "seek_hole": false, 00:07:32.112 "seek_data": false, 00:07:32.112 "copy": true, 00:07:32.112 "nvme_iov_md": false 00:07:32.112 }, 00:07:32.112 "memory_domains": [ 00:07:32.112 { 00:07:32.112 "dma_device_id": "system", 00:07:32.112 "dma_device_type": 1 00:07:32.112 }, 00:07:32.112 { 00:07:32.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.112 "dma_device_type": 2 00:07:32.112 } 00:07:32.112 ], 00:07:32.112 "driver_specific": {} 00:07:32.112 } 00:07:32.112 ] 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.112 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.113 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.113 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.113 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.113 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.113 "name": "Existed_Raid", 00:07:32.113 "uuid": "41bd7bf5-6af6-4cec-ade1-64d9ad4ea4cd", 00:07:32.113 "strip_size_kb": 0, 00:07:32.113 "state": "online", 00:07:32.113 "raid_level": "raid1", 00:07:32.113 "superblock": false, 00:07:32.113 "num_base_bdevs": 2, 00:07:32.113 "num_base_bdevs_discovered": 2, 00:07:32.113 "num_base_bdevs_operational": 2, 00:07:32.113 "base_bdevs_list": [ 00:07:32.113 { 00:07:32.113 "name": "BaseBdev1", 00:07:32.113 "uuid": "0be0c878-706b-4eea-a8bc-6ba471c3235b", 00:07:32.113 "is_configured": true, 00:07:32.113 "data_offset": 0, 00:07:32.113 "data_size": 65536 00:07:32.113 }, 00:07:32.113 { 00:07:32.113 "name": "BaseBdev2", 00:07:32.113 "uuid": "41eb8a20-ae73-475d-b30b-4dad3834ed70", 00:07:32.113 "is_configured": true, 00:07:32.113 "data_offset": 0, 00:07:32.113 "data_size": 65536 00:07:32.113 } 00:07:32.113 ] 00:07:32.113 }' 00:07:32.113 16:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:32.113 16:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.682 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:32.682 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 [2024-11-28 16:20:24.226579] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.683 "name": "Existed_Raid", 00:07:32.683 "aliases": [ 00:07:32.683 "41bd7bf5-6af6-4cec-ade1-64d9ad4ea4cd" 00:07:32.683 ], 00:07:32.683 "product_name": "Raid Volume", 00:07:32.683 "block_size": 512, 00:07:32.683 "num_blocks": 65536, 00:07:32.683 "uuid": "41bd7bf5-6af6-4cec-ade1-64d9ad4ea4cd", 00:07:32.683 "assigned_rate_limits": { 00:07:32.683 "rw_ios_per_sec": 0, 00:07:32.683 "rw_mbytes_per_sec": 0, 00:07:32.683 "r_mbytes_per_sec": 
0, 00:07:32.683 "w_mbytes_per_sec": 0 00:07:32.683 }, 00:07:32.683 "claimed": false, 00:07:32.683 "zoned": false, 00:07:32.683 "supported_io_types": { 00:07:32.683 "read": true, 00:07:32.683 "write": true, 00:07:32.683 "unmap": false, 00:07:32.683 "flush": false, 00:07:32.683 "reset": true, 00:07:32.683 "nvme_admin": false, 00:07:32.683 "nvme_io": false, 00:07:32.683 "nvme_io_md": false, 00:07:32.683 "write_zeroes": true, 00:07:32.683 "zcopy": false, 00:07:32.683 "get_zone_info": false, 00:07:32.683 "zone_management": false, 00:07:32.683 "zone_append": false, 00:07:32.683 "compare": false, 00:07:32.683 "compare_and_write": false, 00:07:32.683 "abort": false, 00:07:32.683 "seek_hole": false, 00:07:32.683 "seek_data": false, 00:07:32.683 "copy": false, 00:07:32.683 "nvme_iov_md": false 00:07:32.683 }, 00:07:32.683 "memory_domains": [ 00:07:32.683 { 00:07:32.683 "dma_device_id": "system", 00:07:32.683 "dma_device_type": 1 00:07:32.683 }, 00:07:32.683 { 00:07:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.683 "dma_device_type": 2 00:07:32.683 }, 00:07:32.683 { 00:07:32.683 "dma_device_id": "system", 00:07:32.683 "dma_device_type": 1 00:07:32.683 }, 00:07:32.683 { 00:07:32.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.683 "dma_device_type": 2 00:07:32.683 } 00:07:32.683 ], 00:07:32.683 "driver_specific": { 00:07:32.683 "raid": { 00:07:32.683 "uuid": "41bd7bf5-6af6-4cec-ade1-64d9ad4ea4cd", 00:07:32.683 "strip_size_kb": 0, 00:07:32.683 "state": "online", 00:07:32.683 "raid_level": "raid1", 00:07:32.683 "superblock": false, 00:07:32.683 "num_base_bdevs": 2, 00:07:32.683 "num_base_bdevs_discovered": 2, 00:07:32.683 "num_base_bdevs_operational": 2, 00:07:32.683 "base_bdevs_list": [ 00:07:32.683 { 00:07:32.683 "name": "BaseBdev1", 00:07:32.683 "uuid": "0be0c878-706b-4eea-a8bc-6ba471c3235b", 00:07:32.683 "is_configured": true, 00:07:32.683 "data_offset": 0, 00:07:32.683 "data_size": 65536 00:07:32.683 }, 00:07:32.683 { 00:07:32.683 "name": "BaseBdev2", 
00:07:32.683 "uuid": "41eb8a20-ae73-475d-b30b-4dad3834ed70", 00:07:32.683 "is_configured": true, 00:07:32.683 "data_offset": 0, 00:07:32.683 "data_size": 65536 00:07:32.683 } 00:07:32.683 ] 00:07:32.683 } 00:07:32.683 } 00:07:32.683 }' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:32.683 BaseBdev2' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.683 [2024-11-28 16:20:24.429980] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.683 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.943 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.943 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.943 "name": "Existed_Raid", 00:07:32.943 "uuid": "41bd7bf5-6af6-4cec-ade1-64d9ad4ea4cd", 00:07:32.943 "strip_size_kb": 0, 00:07:32.943 "state": "online", 00:07:32.943 "raid_level": "raid1", 00:07:32.943 "superblock": false, 00:07:32.943 "num_base_bdevs": 2, 00:07:32.943 "num_base_bdevs_discovered": 1, 00:07:32.943 "num_base_bdevs_operational": 1, 00:07:32.943 "base_bdevs_list": [ 00:07:32.943 
{ 00:07:32.943 "name": null, 00:07:32.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.943 "is_configured": false, 00:07:32.943 "data_offset": 0, 00:07:32.943 "data_size": 65536 00:07:32.943 }, 00:07:32.943 { 00:07:32.943 "name": "BaseBdev2", 00:07:32.943 "uuid": "41eb8a20-ae73-475d-b30b-4dad3834ed70", 00:07:32.943 "is_configured": true, 00:07:32.943 "data_offset": 0, 00:07:32.943 "data_size": 65536 00:07:32.943 } 00:07:32.943 ] 00:07:32.943 }' 00:07:32.943 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.943 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:33.203 [2024-11-28 16:20:24.936307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:33.203 [2024-11-28 16:20:24.936441] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.203 [2024-11-28 16:20:24.947904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.203 [2024-11-28 16:20:24.948022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.203 [2024-11-28 16:20:24.948063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.203 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.464 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:33.464 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:33.464 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:33.464 16:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74015 00:07:33.464 16:20:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74015 ']' 00:07:33.464 16:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74015 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74015 00:07:33.464 killing process with pid 74015 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74015' 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74015 00:07:33.464 [2024-11-28 16:20:25.049247] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.464 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74015 00:07:33.464 [2024-11-28 16:20:25.050203] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:33.726 00:07:33.726 real 0m3.855s 00:07:33.726 user 0m6.038s 00:07:33.726 sys 0m0.774s 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.726 ************************************ 00:07:33.726 END TEST raid_state_function_test 00:07:33.726 ************************************ 00:07:33.726 16:20:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:33.726 16:20:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:33.726 16:20:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.726 16:20:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.726 ************************************ 00:07:33.726 START TEST raid_state_function_test_sb 00:07:33.726 ************************************ 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:33.726 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74256 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74256' 00:07:33.727 Process raid pid: 74256 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74256 00:07:33.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74256 ']' 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.727 16:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.727 [2024-11-28 16:20:25.454214] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.727 [2024-11-28 16:20:25.454438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.987 [2024-11-28 16:20:25.615960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.987 [2024-11-28 16:20:25.661799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.987 [2024-11-28 16:20:25.703974] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.987 [2024-11-28 16:20:25.704086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.558 [2024-11-28 16:20:26.269264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:34.558 [2024-11-28 16:20:26.269316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:34.558 [2024-11-28 16:20:26.269328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.558 [2024-11-28 16:20:26.269337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.558 16:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.558 "name": "Existed_Raid", 00:07:34.558 "uuid": "4c105bc2-8e5d-481f-bd79-3a62f9889457", 00:07:34.558 "strip_size_kb": 0, 00:07:34.558 "state": "configuring", 00:07:34.558 "raid_level": "raid1", 00:07:34.558 "superblock": true, 00:07:34.558 "num_base_bdevs": 2, 00:07:34.558 "num_base_bdevs_discovered": 0, 00:07:34.558 "num_base_bdevs_operational": 2, 00:07:34.558 "base_bdevs_list": [ 00:07:34.558 { 00:07:34.558 "name": "BaseBdev1", 00:07:34.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.558 "is_configured": false, 00:07:34.558 "data_offset": 0, 00:07:34.558 "data_size": 0 00:07:34.558 }, 00:07:34.558 { 00:07:34.558 "name": "BaseBdev2", 00:07:34.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.558 "is_configured": false, 00:07:34.558 "data_offset": 0, 00:07:34.558 "data_size": 0 00:07:34.558 } 00:07:34.558 ] 00:07:34.558 }' 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.558 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.129 
16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.129 [2024-11-28 16:20:26.704424] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.129 [2024-11-28 16:20:26.704512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.129 [2024-11-28 16:20:26.716432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.129 [2024-11-28 16:20:26.716505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.129 [2024-11-28 16:20:26.716529] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.129 [2024-11-28 16:20:26.716550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.129 [2024-11-28 
16:20:26.737120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.129 BaseBdev1 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.129 [ 00:07:35.129 { 00:07:35.129 "name": "BaseBdev1", 00:07:35.129 "aliases": [ 00:07:35.129 "f9a96687-bacf-4fef-b81e-562a75d66a06" 00:07:35.129 ], 00:07:35.129 "product_name": "Malloc disk", 00:07:35.129 "block_size": 512, 00:07:35.129 "num_blocks": 
65536, 00:07:35.129 "uuid": "f9a96687-bacf-4fef-b81e-562a75d66a06", 00:07:35.129 "assigned_rate_limits": { 00:07:35.129 "rw_ios_per_sec": 0, 00:07:35.129 "rw_mbytes_per_sec": 0, 00:07:35.129 "r_mbytes_per_sec": 0, 00:07:35.129 "w_mbytes_per_sec": 0 00:07:35.129 }, 00:07:35.129 "claimed": true, 00:07:35.129 "claim_type": "exclusive_write", 00:07:35.129 "zoned": false, 00:07:35.129 "supported_io_types": { 00:07:35.129 "read": true, 00:07:35.129 "write": true, 00:07:35.129 "unmap": true, 00:07:35.129 "flush": true, 00:07:35.129 "reset": true, 00:07:35.129 "nvme_admin": false, 00:07:35.129 "nvme_io": false, 00:07:35.129 "nvme_io_md": false, 00:07:35.129 "write_zeroes": true, 00:07:35.129 "zcopy": true, 00:07:35.129 "get_zone_info": false, 00:07:35.129 "zone_management": false, 00:07:35.129 "zone_append": false, 00:07:35.129 "compare": false, 00:07:35.129 "compare_and_write": false, 00:07:35.129 "abort": true, 00:07:35.129 "seek_hole": false, 00:07:35.129 "seek_data": false, 00:07:35.129 "copy": true, 00:07:35.129 "nvme_iov_md": false 00:07:35.129 }, 00:07:35.129 "memory_domains": [ 00:07:35.129 { 00:07:35.129 "dma_device_id": "system", 00:07:35.129 "dma_device_type": 1 00:07:35.129 }, 00:07:35.129 { 00:07:35.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.129 "dma_device_type": 2 00:07:35.129 } 00:07:35.129 ], 00:07:35.129 "driver_specific": {} 00:07:35.129 } 00:07:35.129 ] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.129 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.130 "name": "Existed_Raid", 00:07:35.130 "uuid": "5deb1b62-4151-452f-a91c-492b91e180d5", 00:07:35.130 "strip_size_kb": 0, 00:07:35.130 "state": "configuring", 00:07:35.130 "raid_level": "raid1", 00:07:35.130 "superblock": true, 00:07:35.130 "num_base_bdevs": 2, 00:07:35.130 "num_base_bdevs_discovered": 1, 00:07:35.130 "num_base_bdevs_operational": 2, 00:07:35.130 "base_bdevs_list": [ 00:07:35.130 { 00:07:35.130 "name": "BaseBdev1", 00:07:35.130 "uuid": 
"f9a96687-bacf-4fef-b81e-562a75d66a06", 00:07:35.130 "is_configured": true, 00:07:35.130 "data_offset": 2048, 00:07:35.130 "data_size": 63488 00:07:35.130 }, 00:07:35.130 { 00:07:35.130 "name": "BaseBdev2", 00:07:35.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.130 "is_configured": false, 00:07:35.130 "data_offset": 0, 00:07:35.130 "data_size": 0 00:07:35.130 } 00:07:35.130 ] 00:07:35.130 }' 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.130 16:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.700 [2024-11-28 16:20:27.208341] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.700 [2024-11-28 16:20:27.208430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.700 [2024-11-28 16:20:27.220351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.700 [2024-11-28 16:20:27.222156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:07:35.700 [2024-11-28 16:20:27.222229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.700 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.701 "name": "Existed_Raid", 00:07:35.701 "uuid": "350e555f-56ba-4074-acc8-d457dcbf52bb", 00:07:35.701 "strip_size_kb": 0, 00:07:35.701 "state": "configuring", 00:07:35.701 "raid_level": "raid1", 00:07:35.701 "superblock": true, 00:07:35.701 "num_base_bdevs": 2, 00:07:35.701 "num_base_bdevs_discovered": 1, 00:07:35.701 "num_base_bdevs_operational": 2, 00:07:35.701 "base_bdevs_list": [ 00:07:35.701 { 00:07:35.701 "name": "BaseBdev1", 00:07:35.701 "uuid": "f9a96687-bacf-4fef-b81e-562a75d66a06", 00:07:35.701 "is_configured": true, 00:07:35.701 "data_offset": 2048, 00:07:35.701 "data_size": 63488 00:07:35.701 }, 00:07:35.701 { 00:07:35.701 "name": "BaseBdev2", 00:07:35.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.701 "is_configured": false, 00:07:35.701 "data_offset": 0, 00:07:35.701 "data_size": 0 00:07:35.701 } 00:07:35.701 ] 00:07:35.701 }' 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.701 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.962 [2024-11-28 16:20:27.675219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.962 [2024-11-28 16:20:27.675986] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000006980 00:07:35.962 [2024-11-28 16:20:27.676184] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:35.962 BaseBdev2 00:07:35.962 [2024-11-28 16:20:27.677219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:35.962 [2024-11-28 16:20:27.677787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:35.962 [2024-11-28 16:20:27.677950] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:35.962 [2024-11-28 16:20:27.678261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.962 16:20:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.962 [ 00:07:35.962 { 00:07:35.962 "name": "BaseBdev2", 00:07:35.962 "aliases": [ 00:07:35.962 "13970124-0853-46fe-8b97-e56bf3ec6d72" 00:07:35.962 ], 00:07:35.962 "product_name": "Malloc disk", 00:07:35.962 "block_size": 512, 00:07:35.962 "num_blocks": 65536, 00:07:35.962 "uuid": "13970124-0853-46fe-8b97-e56bf3ec6d72", 00:07:35.962 "assigned_rate_limits": { 00:07:35.962 "rw_ios_per_sec": 0, 00:07:35.962 "rw_mbytes_per_sec": 0, 00:07:35.962 "r_mbytes_per_sec": 0, 00:07:35.962 "w_mbytes_per_sec": 0 00:07:35.962 }, 00:07:35.962 "claimed": true, 00:07:35.962 "claim_type": "exclusive_write", 00:07:35.962 "zoned": false, 00:07:35.962 "supported_io_types": { 00:07:35.962 "read": true, 00:07:35.962 "write": true, 00:07:35.962 "unmap": true, 00:07:35.962 "flush": true, 00:07:35.962 "reset": true, 00:07:35.962 "nvme_admin": false, 00:07:35.962 "nvme_io": false, 00:07:35.962 "nvme_io_md": false, 00:07:35.962 "write_zeroes": true, 00:07:35.962 "zcopy": true, 00:07:35.962 "get_zone_info": false, 00:07:35.962 "zone_management": false, 00:07:35.962 "zone_append": false, 00:07:35.962 "compare": false, 00:07:35.962 "compare_and_write": false, 00:07:35.962 "abort": true, 00:07:35.962 "seek_hole": false, 00:07:35.962 "seek_data": false, 00:07:35.962 "copy": true, 00:07:35.962 "nvme_iov_md": false 00:07:35.962 }, 00:07:35.962 "memory_domains": [ 00:07:35.962 { 00:07:35.962 "dma_device_id": "system", 00:07:35.962 "dma_device_type": 1 00:07:35.962 }, 00:07:35.962 { 00:07:35.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.962 "dma_device_type": 2 00:07:35.962 } 00:07:35.962 ], 00:07:35.962 "driver_specific": {} 00:07:35.962 } 00:07:35.962 ] 
00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.962 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.962 
16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.221 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.221 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.221 "name": "Existed_Raid", 00:07:36.221 "uuid": "350e555f-56ba-4074-acc8-d457dcbf52bb", 00:07:36.221 "strip_size_kb": 0, 00:07:36.221 "state": "online", 00:07:36.221 "raid_level": "raid1", 00:07:36.221 "superblock": true, 00:07:36.221 "num_base_bdevs": 2, 00:07:36.221 "num_base_bdevs_discovered": 2, 00:07:36.221 "num_base_bdevs_operational": 2, 00:07:36.221 "base_bdevs_list": [ 00:07:36.221 { 00:07:36.221 "name": "BaseBdev1", 00:07:36.221 "uuid": "f9a96687-bacf-4fef-b81e-562a75d66a06", 00:07:36.221 "is_configured": true, 00:07:36.221 "data_offset": 2048, 00:07:36.221 "data_size": 63488 00:07:36.221 }, 00:07:36.221 { 00:07:36.221 "name": "BaseBdev2", 00:07:36.221 "uuid": "13970124-0853-46fe-8b97-e56bf3ec6d72", 00:07:36.221 "is_configured": true, 00:07:36.221 "data_offset": 2048, 00:07:36.221 "data_size": 63488 00:07:36.221 } 00:07:36.221 ] 00:07:36.221 }' 00:07:36.221 16:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.221 16:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.481 16:20:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.481 [2024-11-28 16:20:28.150632] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.481 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:36.481 "name": "Existed_Raid", 00:07:36.481 "aliases": [ 00:07:36.481 "350e555f-56ba-4074-acc8-d457dcbf52bb" 00:07:36.481 ], 00:07:36.481 "product_name": "Raid Volume", 00:07:36.481 "block_size": 512, 00:07:36.481 "num_blocks": 63488, 00:07:36.481 "uuid": "350e555f-56ba-4074-acc8-d457dcbf52bb", 00:07:36.481 "assigned_rate_limits": { 00:07:36.481 "rw_ios_per_sec": 0, 00:07:36.481 "rw_mbytes_per_sec": 0, 00:07:36.481 "r_mbytes_per_sec": 0, 00:07:36.481 "w_mbytes_per_sec": 0 00:07:36.481 }, 00:07:36.481 "claimed": false, 00:07:36.481 "zoned": false, 00:07:36.481 "supported_io_types": { 00:07:36.481 "read": true, 00:07:36.481 "write": true, 00:07:36.481 "unmap": false, 00:07:36.481 "flush": false, 00:07:36.481 "reset": true, 00:07:36.481 "nvme_admin": false, 00:07:36.481 "nvme_io": false, 00:07:36.481 "nvme_io_md": false, 00:07:36.481 "write_zeroes": true, 00:07:36.481 "zcopy": false, 00:07:36.481 "get_zone_info": false, 00:07:36.481 "zone_management": false, 00:07:36.481 "zone_append": false, 00:07:36.481 "compare": false, 00:07:36.481 "compare_and_write": false, 00:07:36.481 "abort": false, 
00:07:36.481 "seek_hole": false, 00:07:36.481 "seek_data": false, 00:07:36.481 "copy": false, 00:07:36.481 "nvme_iov_md": false 00:07:36.481 }, 00:07:36.481 "memory_domains": [ 00:07:36.481 { 00:07:36.481 "dma_device_id": "system", 00:07:36.481 "dma_device_type": 1 00:07:36.481 }, 00:07:36.481 { 00:07:36.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.481 "dma_device_type": 2 00:07:36.481 }, 00:07:36.481 { 00:07:36.481 "dma_device_id": "system", 00:07:36.481 "dma_device_type": 1 00:07:36.481 }, 00:07:36.481 { 00:07:36.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.481 "dma_device_type": 2 00:07:36.481 } 00:07:36.481 ], 00:07:36.481 "driver_specific": { 00:07:36.481 "raid": { 00:07:36.482 "uuid": "350e555f-56ba-4074-acc8-d457dcbf52bb", 00:07:36.482 "strip_size_kb": 0, 00:07:36.482 "state": "online", 00:07:36.482 "raid_level": "raid1", 00:07:36.482 "superblock": true, 00:07:36.482 "num_base_bdevs": 2, 00:07:36.482 "num_base_bdevs_discovered": 2, 00:07:36.482 "num_base_bdevs_operational": 2, 00:07:36.482 "base_bdevs_list": [ 00:07:36.482 { 00:07:36.482 "name": "BaseBdev1", 00:07:36.482 "uuid": "f9a96687-bacf-4fef-b81e-562a75d66a06", 00:07:36.482 "is_configured": true, 00:07:36.482 "data_offset": 2048, 00:07:36.482 "data_size": 63488 00:07:36.482 }, 00:07:36.482 { 00:07:36.482 "name": "BaseBdev2", 00:07:36.482 "uuid": "13970124-0853-46fe-8b97-e56bf3ec6d72", 00:07:36.482 "is_configured": true, 00:07:36.482 "data_offset": 2048, 00:07:36.482 "data_size": 63488 00:07:36.482 } 00:07:36.482 ] 00:07:36.482 } 00:07:36.482 } 00:07:36.482 }' 00:07:36.482 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.482 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:36.482 BaseBdev2' 00:07:36.482 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.743 16:20:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.743 [2024-11-28 16:20:28.362034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.743 "name": "Existed_Raid", 00:07:36.743 "uuid": "350e555f-56ba-4074-acc8-d457dcbf52bb", 00:07:36.743 "strip_size_kb": 0, 00:07:36.743 "state": "online", 00:07:36.743 "raid_level": "raid1", 00:07:36.743 "superblock": true, 00:07:36.743 "num_base_bdevs": 2, 00:07:36.743 "num_base_bdevs_discovered": 1, 00:07:36.743 "num_base_bdevs_operational": 1, 00:07:36.743 "base_bdevs_list": [ 00:07:36.743 { 00:07:36.743 "name": null, 00:07:36.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.743 "is_configured": false, 00:07:36.743 "data_offset": 0, 00:07:36.743 "data_size": 63488 00:07:36.743 }, 00:07:36.743 { 00:07:36.743 "name": "BaseBdev2", 00:07:36.743 "uuid": "13970124-0853-46fe-8b97-e56bf3ec6d72", 00:07:36.743 "is_configured": true, 00:07:36.743 "data_offset": 2048, 00:07:36.743 "data_size": 63488 00:07:36.743 } 00:07:36.743 ] 00:07:36.743 }' 00:07:36.743 16:20:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.743 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.315 [2024-11-28 16:20:28.852308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:37.315 [2024-11-28 16:20:28.852460] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.315 [2024-11-28 16:20:28.864128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.315 [2024-11-28 16:20:28.864177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.315 [2024-11-28 16:20:28.864189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74256 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74256 ']' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74256 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74256 00:07:37.315 killing process with pid 74256 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74256' 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74256 00:07:37.315 [2024-11-28 16:20:28.952683] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.315 16:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74256 00:07:37.315 [2024-11-28 16:20:28.953691] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.575 16:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:37.575 00:07:37.575 real 0m3.840s 00:07:37.575 user 0m6.000s 00:07:37.575 sys 0m0.776s 00:07:37.575 16:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.575 16:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 ************************************ 00:07:37.575 END TEST raid_state_function_test_sb 00:07:37.575 ************************************ 00:07:37.575 16:20:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:37.575 16:20:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:37.575 16:20:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.575 16:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 ************************************ 00:07:37.575 START TEST 
raid_superblock_test 00:07:37.575 ************************************ 00:07:37.575 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:37.575 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:37.575 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:37.575 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:37.575 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74493 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74493 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74493 ']' 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.576 16:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.835 [2024-11-28 16:20:29.357458] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:37.835 [2024-11-28 16:20:29.357661] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74493 ] 00:07:37.835 [2024-11-28 16:20:29.514995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.835 [2024-11-28 16:20:29.558891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.835 [2024-11-28 16:20:29.600647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.835 [2024-11-28 16:20:29.600765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:38.405 
16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.405 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.666 malloc1 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.666 [2024-11-28 16:20:30.186611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.666 [2024-11-28 16:20:30.186731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.666 [2024-11-28 16:20:30.186779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:38.666 [2024-11-28 16:20:30.186827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.666 [2024-11-28 16:20:30.188912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.666 [2024-11-28 16:20:30.188989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.666 pt1 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:38.666 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.667 malloc2 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.667 [2024-11-28 16:20:30.230737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:38.667 [2024-11-28 16:20:30.230961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.667 [2024-11-28 16:20:30.231044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:38.667 [2024-11-28 16:20:30.231118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.667 [2024-11-28 16:20:30.235595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.667 [2024-11-28 16:20:30.235704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:38.667 
pt2 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.667 [2024-11-28 16:20:30.243985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:38.667 [2024-11-28 16:20:30.246398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:38.667 [2024-11-28 16:20:30.246583] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:38.667 [2024-11-28 16:20:30.246636] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:38.667 [2024-11-28 16:20:30.246927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:38.667 [2024-11-28 16:20:30.247105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:38.667 [2024-11-28 16:20:30.247151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:38.667 [2024-11-28 16:20:30.247337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.667 "name": "raid_bdev1", 00:07:38.667 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:38.667 "strip_size_kb": 0, 00:07:38.667 "state": "online", 00:07:38.667 "raid_level": "raid1", 00:07:38.667 "superblock": true, 00:07:38.667 "num_base_bdevs": 2, 00:07:38.667 "num_base_bdevs_discovered": 2, 00:07:38.667 "num_base_bdevs_operational": 2, 00:07:38.667 "base_bdevs_list": [ 00:07:38.667 { 00:07:38.667 "name": "pt1", 00:07:38.667 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:38.667 "is_configured": true, 00:07:38.667 "data_offset": 2048, 00:07:38.667 "data_size": 63488 00:07:38.667 }, 00:07:38.667 { 00:07:38.667 "name": "pt2", 00:07:38.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.667 "is_configured": true, 00:07:38.667 "data_offset": 2048, 00:07:38.667 "data_size": 63488 00:07:38.667 } 00:07:38.667 ] 00:07:38.667 }' 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.667 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.928 [2024-11-28 16:20:30.679420] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.928 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:39.189 "name": "raid_bdev1", 00:07:39.189 "aliases": [ 00:07:39.189 "d9cb6c95-2d51-40c9-b078-2a296e9a56cc" 00:07:39.189 ], 00:07:39.189 "product_name": "Raid Volume", 00:07:39.189 "block_size": 512, 00:07:39.189 "num_blocks": 63488, 00:07:39.189 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:39.189 "assigned_rate_limits": { 00:07:39.189 "rw_ios_per_sec": 0, 00:07:39.189 "rw_mbytes_per_sec": 0, 00:07:39.189 "r_mbytes_per_sec": 0, 00:07:39.189 "w_mbytes_per_sec": 0 00:07:39.189 }, 00:07:39.189 "claimed": false, 00:07:39.189 "zoned": false, 00:07:39.189 "supported_io_types": { 00:07:39.189 "read": true, 00:07:39.189 "write": true, 00:07:39.189 "unmap": false, 00:07:39.189 "flush": false, 00:07:39.189 "reset": true, 00:07:39.189 "nvme_admin": false, 00:07:39.189 "nvme_io": false, 00:07:39.189 "nvme_io_md": false, 00:07:39.189 "write_zeroes": true, 00:07:39.189 "zcopy": false, 00:07:39.189 "get_zone_info": false, 00:07:39.189 "zone_management": false, 00:07:39.189 "zone_append": false, 00:07:39.189 "compare": false, 00:07:39.189 "compare_and_write": false, 00:07:39.189 "abort": false, 00:07:39.189 "seek_hole": false, 00:07:39.189 "seek_data": false, 00:07:39.189 "copy": false, 00:07:39.189 "nvme_iov_md": false 00:07:39.189 }, 00:07:39.189 "memory_domains": [ 00:07:39.189 { 00:07:39.189 "dma_device_id": "system", 00:07:39.189 "dma_device_type": 1 00:07:39.189 }, 00:07:39.189 { 00:07:39.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.189 "dma_device_type": 2 00:07:39.189 }, 00:07:39.189 { 00:07:39.189 "dma_device_id": "system", 00:07:39.189 "dma_device_type": 1 00:07:39.189 }, 00:07:39.189 { 00:07:39.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.189 "dma_device_type": 2 00:07:39.189 } 00:07:39.189 ], 00:07:39.189 "driver_specific": { 00:07:39.189 "raid": { 00:07:39.189 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:39.189 "strip_size_kb": 0, 00:07:39.189 "state": "online", 00:07:39.189 "raid_level": "raid1", 
00:07:39.189 "superblock": true, 00:07:39.189 "num_base_bdevs": 2, 00:07:39.189 "num_base_bdevs_discovered": 2, 00:07:39.189 "num_base_bdevs_operational": 2, 00:07:39.189 "base_bdevs_list": [ 00:07:39.189 { 00:07:39.189 "name": "pt1", 00:07:39.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.189 "is_configured": true, 00:07:39.189 "data_offset": 2048, 00:07:39.189 "data_size": 63488 00:07:39.189 }, 00:07:39.189 { 00:07:39.189 "name": "pt2", 00:07:39.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.189 "is_configured": true, 00:07:39.189 "data_offset": 2048, 00:07:39.189 "data_size": 63488 00:07:39.189 } 00:07:39.189 ] 00:07:39.189 } 00:07:39.189 } 00:07:39.189 }' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:39.189 pt2' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.189 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.190 [2024-11-28 16:20:30.882998] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d9cb6c95-2d51-40c9-b078-2a296e9a56cc 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d9cb6c95-2d51-40c9-b078-2a296e9a56cc ']' 00:07:39.190 16:20:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.190 [2024-11-28 16:20:30.910720] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.190 [2024-11-28 16:20:30.910785] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.190 [2024-11-28 16:20:30.910889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.190 [2024-11-28 16:20:30.910987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.190 [2024-11-28 16:20:30.911034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.190 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.448 16:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:39.448 16:20:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.448 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.448 [2024-11-28 16:20:31.026539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:39.448 [2024-11-28 16:20:31.028416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:39.449 [2024-11-28 16:20:31.028524] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:39.449 [2024-11-28 16:20:31.028600] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:39.449 [2024-11-28 16:20:31.028637] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.449 [2024-11-28 16:20:31.028657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:39.449 request: 00:07:39.449 { 00:07:39.449 "name": "raid_bdev1", 00:07:39.449 "raid_level": "raid1", 00:07:39.449 "base_bdevs": [ 00:07:39.449 "malloc1", 00:07:39.449 "malloc2" 00:07:39.449 ], 00:07:39.449 "superblock": false, 00:07:39.449 "method": "bdev_raid_create", 00:07:39.449 "req_id": 1 00:07:39.449 } 00:07:39.449 Got 
JSON-RPC error response 00:07:39.449 response: 00:07:39.449 { 00:07:39.449 "code": -17, 00:07:39.449 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:39.449 } 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.449 [2024-11-28 16:20:31.078425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:39.449 [2024-11-28 16:20:31.078504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:39.449 [2024-11-28 16:20:31.078536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:39.449 [2024-11-28 16:20:31.078563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.449 [2024-11-28 16:20:31.080628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.449 [2024-11-28 16:20:31.080696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:39.449 [2024-11-28 16:20:31.080777] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:39.449 [2024-11-28 16:20:31.080845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:39.449 pt1 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.449 
16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.449 "name": "raid_bdev1", 00:07:39.449 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:39.449 "strip_size_kb": 0, 00:07:39.449 "state": "configuring", 00:07:39.449 "raid_level": "raid1", 00:07:39.449 "superblock": true, 00:07:39.449 "num_base_bdevs": 2, 00:07:39.449 "num_base_bdevs_discovered": 1, 00:07:39.449 "num_base_bdevs_operational": 2, 00:07:39.449 "base_bdevs_list": [ 00:07:39.449 { 00:07:39.449 "name": "pt1", 00:07:39.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.449 "is_configured": true, 00:07:39.449 "data_offset": 2048, 00:07:39.449 "data_size": 63488 00:07:39.449 }, 00:07:39.449 { 00:07:39.449 "name": null, 00:07:39.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.449 "is_configured": false, 00:07:39.449 "data_offset": 2048, 00:07:39.449 "data_size": 63488 00:07:39.449 } 00:07:39.449 ] 00:07:39.449 }' 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.449 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.020 [2024-11-28 16:20:31.501702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.020 [2024-11-28 16:20:31.501795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.020 [2024-11-28 16:20:31.501844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:40.020 [2024-11-28 16:20:31.501873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.020 [2024-11-28 16:20:31.502251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.020 [2024-11-28 16:20:31.502305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.020 [2024-11-28 16:20:31.502390] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:40.020 [2024-11-28 16:20:31.502434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.020 [2024-11-28 16:20:31.502533] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:40.020 [2024-11-28 16:20:31.502568] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:40.020 [2024-11-28 16:20:31.502797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:40.020 [2024-11-28 16:20:31.502960] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:40.020 [2024-11-28 16:20:31.503008] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006980 00:07:40.020 [2024-11-28 16:20:31.503135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.020 pt2 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.020 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:40.021 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.021 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.021 "name": "raid_bdev1", 00:07:40.021 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:40.021 "strip_size_kb": 0, 00:07:40.021 "state": "online", 00:07:40.021 "raid_level": "raid1", 00:07:40.021 "superblock": true, 00:07:40.021 "num_base_bdevs": 2, 00:07:40.021 "num_base_bdevs_discovered": 2, 00:07:40.021 "num_base_bdevs_operational": 2, 00:07:40.021 "base_bdevs_list": [ 00:07:40.021 { 00:07:40.021 "name": "pt1", 00:07:40.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.021 "is_configured": true, 00:07:40.021 "data_offset": 2048, 00:07:40.021 "data_size": 63488 00:07:40.021 }, 00:07:40.021 { 00:07:40.021 "name": "pt2", 00:07:40.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.021 "is_configured": true, 00:07:40.021 "data_offset": 2048, 00:07:40.021 "data_size": 63488 00:07:40.021 } 00:07:40.021 ] 00:07:40.021 }' 00:07:40.021 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.021 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.280 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.281 [2024-11-28 16:20:31.905244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.281 "name": "raid_bdev1", 00:07:40.281 "aliases": [ 00:07:40.281 "d9cb6c95-2d51-40c9-b078-2a296e9a56cc" 00:07:40.281 ], 00:07:40.281 "product_name": "Raid Volume", 00:07:40.281 "block_size": 512, 00:07:40.281 "num_blocks": 63488, 00:07:40.281 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:40.281 "assigned_rate_limits": { 00:07:40.281 "rw_ios_per_sec": 0, 00:07:40.281 "rw_mbytes_per_sec": 0, 00:07:40.281 "r_mbytes_per_sec": 0, 00:07:40.281 "w_mbytes_per_sec": 0 00:07:40.281 }, 00:07:40.281 "claimed": false, 00:07:40.281 "zoned": false, 00:07:40.281 "supported_io_types": { 00:07:40.281 "read": true, 00:07:40.281 "write": true, 00:07:40.281 "unmap": false, 00:07:40.281 "flush": false, 00:07:40.281 "reset": true, 00:07:40.281 "nvme_admin": false, 00:07:40.281 "nvme_io": false, 00:07:40.281 "nvme_io_md": false, 00:07:40.281 "write_zeroes": true, 00:07:40.281 "zcopy": false, 00:07:40.281 "get_zone_info": false, 00:07:40.281 "zone_management": false, 00:07:40.281 "zone_append": false, 00:07:40.281 "compare": false, 00:07:40.281 "compare_and_write": false, 00:07:40.281 "abort": false, 00:07:40.281 "seek_hole": false, 00:07:40.281 "seek_data": false, 00:07:40.281 "copy": false, 00:07:40.281 "nvme_iov_md": false 00:07:40.281 }, 00:07:40.281 "memory_domains": [ 00:07:40.281 { 00:07:40.281 "dma_device_id": 
"system", 00:07:40.281 "dma_device_type": 1 00:07:40.281 }, 00:07:40.281 { 00:07:40.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.281 "dma_device_type": 2 00:07:40.281 }, 00:07:40.281 { 00:07:40.281 "dma_device_id": "system", 00:07:40.281 "dma_device_type": 1 00:07:40.281 }, 00:07:40.281 { 00:07:40.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.281 "dma_device_type": 2 00:07:40.281 } 00:07:40.281 ], 00:07:40.281 "driver_specific": { 00:07:40.281 "raid": { 00:07:40.281 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:40.281 "strip_size_kb": 0, 00:07:40.281 "state": "online", 00:07:40.281 "raid_level": "raid1", 00:07:40.281 "superblock": true, 00:07:40.281 "num_base_bdevs": 2, 00:07:40.281 "num_base_bdevs_discovered": 2, 00:07:40.281 "num_base_bdevs_operational": 2, 00:07:40.281 "base_bdevs_list": [ 00:07:40.281 { 00:07:40.281 "name": "pt1", 00:07:40.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.281 "is_configured": true, 00:07:40.281 "data_offset": 2048, 00:07:40.281 "data_size": 63488 00:07:40.281 }, 00:07:40.281 { 00:07:40.281 "name": "pt2", 00:07:40.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.281 "is_configured": true, 00:07:40.281 "data_offset": 2048, 00:07:40.281 "data_size": 63488 00:07:40.281 } 00:07:40.281 ] 00:07:40.281 } 00:07:40.281 } 00:07:40.281 }' 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:40.281 pt2' 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.281 16:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.281 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.281 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.281 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.281 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.541 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 
-- # jq -r '.[] | .uuid' 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.542 [2024-11-28 16:20:32.104891] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d9cb6c95-2d51-40c9-b078-2a296e9a56cc '!=' d9cb6c95-2d51-40c9-b078-2a296e9a56cc ']' 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.542 [2024-11-28 16:20:32.148605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.542 "name": "raid_bdev1", 00:07:40.542 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:40.542 "strip_size_kb": 0, 00:07:40.542 "state": "online", 00:07:40.542 "raid_level": "raid1", 00:07:40.542 "superblock": true, 00:07:40.542 "num_base_bdevs": 2, 00:07:40.542 "num_base_bdevs_discovered": 1, 00:07:40.542 "num_base_bdevs_operational": 1, 00:07:40.542 "base_bdevs_list": [ 00:07:40.542 { 00:07:40.542 "name": null, 00:07:40.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.542 "is_configured": false, 00:07:40.542 "data_offset": 0, 00:07:40.542 "data_size": 63488 00:07:40.542 }, 00:07:40.542 { 00:07:40.542 "name": "pt2", 00:07:40.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.542 "is_configured": true, 00:07:40.542 "data_offset": 2048, 00:07:40.542 "data_size": 63488 00:07:40.542 } 00:07:40.542 ] 00:07:40.542 }' 00:07:40.542 16:20:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.542 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.115 [2024-11-28 16:20:32.595824] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.115 [2024-11-28 16:20:32.595874] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.115 [2024-11-28 16:20:32.595956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.115 [2024-11-28 16:20:32.596004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.115 [2024-11-28 16:20:32.596013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:41.115 
16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.115 [2024-11-28 16:20:32.667705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:41.115 [2024-11-28 16:20:32.667799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.115 [2024-11-28 16:20:32.667841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:41.115 [2024-11-28 16:20:32.667874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.115 [2024-11-28 
16:20:32.669997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.115 [2024-11-28 16:20:32.670075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:41.115 [2024-11-28 16:20:32.670174] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:41.115 [2024-11-28 16:20:32.670240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:41.115 [2024-11-28 16:20:32.670340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:41.115 [2024-11-28 16:20:32.670375] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:41.115 [2024-11-28 16:20:32.670597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.115 [2024-11-28 16:20:32.670742] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:41.115 [2024-11-28 16:20:32.670786] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:07:41.115 [2024-11-28 16:20:32.670946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.115 pt2 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.115 "name": "raid_bdev1", 00:07:41.115 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:41.115 "strip_size_kb": 0, 00:07:41.115 "state": "online", 00:07:41.115 "raid_level": "raid1", 00:07:41.115 "superblock": true, 00:07:41.115 "num_base_bdevs": 2, 00:07:41.115 "num_base_bdevs_discovered": 1, 00:07:41.115 "num_base_bdevs_operational": 1, 00:07:41.115 "base_bdevs_list": [ 00:07:41.115 { 00:07:41.115 "name": null, 00:07:41.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.115 "is_configured": false, 00:07:41.115 "data_offset": 2048, 00:07:41.115 "data_size": 63488 00:07:41.115 }, 00:07:41.115 { 00:07:41.115 "name": "pt2", 00:07:41.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.115 "is_configured": true, 00:07:41.115 "data_offset": 2048, 00:07:41.115 "data_size": 63488 00:07:41.115 } 00:07:41.115 ] 00:07:41.115 }' 
00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.115 16:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.375 [2024-11-28 16:20:33.082994] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.375 [2024-11-28 16:20:33.083054] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.375 [2024-11-28 16:20:33.083128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.375 [2024-11-28 16:20:33.083182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.375 [2024-11-28 16:20:33.083215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.375 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.375 [2024-11-28 16:20:33.134902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.375 [2024-11-28 16:20:33.134986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.375 [2024-11-28 16:20:33.135021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:41.375 [2024-11-28 16:20:33.135057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.375 [2024-11-28 16:20:33.137157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.375 [2024-11-28 16:20:33.137226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.375 [2024-11-28 16:20:33.137306] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:41.375 [2024-11-28 16:20:33.137361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:41.375 [2024-11-28 16:20:33.137501] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:41.375 [2024-11-28 16:20:33.137562] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.375 [2024-11-28 16:20:33.137614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:41.375 [2024-11-28 16:20:33.137684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:41.375 [2024-11-28 16:20:33.137774] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:41.375 [2024-11-28 16:20:33.137814] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:41.375 [2024-11-28 16:20:33.138049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:41.375 [2024-11-28 16:20:33.138193] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:41.375 [2024-11-28 16:20:33.138232] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:41.376 [2024-11-28 16:20:33.138367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.376 pt1 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:41.376 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.635 "name": "raid_bdev1", 00:07:41.635 "uuid": "d9cb6c95-2d51-40c9-b078-2a296e9a56cc", 00:07:41.635 "strip_size_kb": 0, 00:07:41.635 "state": "online", 00:07:41.635 "raid_level": "raid1", 00:07:41.635 "superblock": true, 00:07:41.635 "num_base_bdevs": 2, 00:07:41.635 "num_base_bdevs_discovered": 1, 00:07:41.635 "num_base_bdevs_operational": 1, 00:07:41.635 "base_bdevs_list": [ 00:07:41.635 { 00:07:41.635 "name": null, 00:07:41.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.635 "is_configured": false, 00:07:41.635 "data_offset": 2048, 00:07:41.635 "data_size": 63488 00:07:41.635 }, 00:07:41.635 { 00:07:41.635 "name": "pt2", 00:07:41.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.635 "is_configured": true, 00:07:41.635 "data_offset": 2048, 00:07:41.635 "data_size": 63488 00:07:41.635 } 00:07:41.635 ] 00:07:41.635 }' 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.635 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:41.895 [2024-11-28 16:20:33.606292] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d9cb6c95-2d51-40c9-b078-2a296e9a56cc '!=' d9cb6c95-2d51-40c9-b078-2a296e9a56cc ']' 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74493 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74493 ']' 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74493 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.895 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74493 00:07:42.154 killing process with pid 
74493 00:07:42.154 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.154 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.154 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74493' 00:07:42.154 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74493 00:07:42.154 [2024-11-28 16:20:33.665853] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.154 [2024-11-28 16:20:33.665929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.154 [2024-11-28 16:20:33.665971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.154 [2024-11-28 16:20:33.665979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:42.154 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74493 00:07:42.154 [2024-11-28 16:20:33.688874] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.415 16:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:42.415 00:07:42.415 real 0m4.661s 00:07:42.415 user 0m7.576s 00:07:42.415 sys 0m0.956s 00:07:42.415 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.415 ************************************ 00:07:42.415 END TEST raid_superblock_test 00:07:42.415 ************************************ 00:07:42.415 16:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.415 16:20:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:42.415 16:20:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:42.415 16:20:33 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.415 16:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.415 ************************************ 00:07:42.415 START TEST raid_read_error_test 00:07:42.415 ************************************ 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.415 16:20:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vYa7H81XWd 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74812 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74812 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74812 ']' 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.415 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.415 [2024-11-28 16:20:34.106384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:42.415 [2024-11-28 16:20:34.106503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74812 ] 00:07:42.675 [2024-11-28 16:20:34.264093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.675 [2024-11-28 16:20:34.307020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.675 [2024-11-28 16:20:34.347962] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.675 [2024-11-28 16:20:34.348001] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.243 BaseBdev1_malloc 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.243 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.244 true 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.244 [2024-11-28 16:20:34.953268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.244 [2024-11-28 16:20:34.953368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.244 [2024-11-28 16:20:34.953412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:43.244 [2024-11-28 16:20:34.953445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.244 [2024-11-28 16:20:34.955466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.244 [2024-11-28 16:20:34.955534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.244 BaseBdev1 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:43.244 BaseBdev2_malloc 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.244 16:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.244 true 00:07:43.244 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.244 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.244 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.244 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.244 [2024-11-28 16:20:35.009641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.244 [2024-11-28 16:20:35.009729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.244 [2024-11-28 16:20:35.009763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:43.244 [2024-11-28 16:20:35.009794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.244 [2024-11-28 16:20:35.011780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.244 [2024-11-28 16:20:35.011860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.503 BaseBdev2 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:43.503 16:20:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.503 [2024-11-28 16:20:35.021652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.503 [2024-11-28 16:20:35.023461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.503 [2024-11-28 16:20:35.023663] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:43.503 [2024-11-28 16:20:35.023722] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:43.503 [2024-11-28 16:20:35.024006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.503 [2024-11-28 16:20:35.024174] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:43.503 [2024-11-28 16:20:35.024219] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:43.503 [2024-11-28 16:20:35.024379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.503 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.503 "name": "raid_bdev1", 00:07:43.503 "uuid": "174d8007-693b-45c0-9d5b-fdd65bd5b3b2", 00:07:43.503 "strip_size_kb": 0, 00:07:43.503 "state": "online", 00:07:43.503 "raid_level": "raid1", 00:07:43.503 "superblock": true, 00:07:43.503 "num_base_bdevs": 2, 00:07:43.504 "num_base_bdevs_discovered": 2, 00:07:43.504 "num_base_bdevs_operational": 2, 00:07:43.504 "base_bdevs_list": [ 00:07:43.504 { 00:07:43.504 "name": "BaseBdev1", 00:07:43.504 "uuid": "1a2a9b8d-8e9c-5570-86cb-2a6a1f52f97a", 00:07:43.504 "is_configured": true, 00:07:43.504 "data_offset": 2048, 00:07:43.504 "data_size": 63488 00:07:43.504 }, 00:07:43.504 { 00:07:43.504 "name": "BaseBdev2", 00:07:43.504 "uuid": "f6c36cee-901e-5611-baa1-a0569bee20fb", 00:07:43.504 "is_configured": true, 00:07:43.504 "data_offset": 2048, 00:07:43.504 "data_size": 63488 00:07:43.504 } 00:07:43.504 ] 00:07:43.504 }' 00:07:43.504 16:20:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.504 16:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.762 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:43.762 16:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.022 [2024-11-28 16:20:35.549117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.961 16:20:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.961 "name": "raid_bdev1", 00:07:44.961 "uuid": "174d8007-693b-45c0-9d5b-fdd65bd5b3b2", 00:07:44.961 "strip_size_kb": 0, 00:07:44.961 "state": "online", 00:07:44.961 "raid_level": "raid1", 00:07:44.961 "superblock": true, 00:07:44.961 "num_base_bdevs": 2, 00:07:44.961 "num_base_bdevs_discovered": 2, 00:07:44.961 "num_base_bdevs_operational": 2, 00:07:44.961 "base_bdevs_list": [ 00:07:44.961 { 00:07:44.961 "name": "BaseBdev1", 00:07:44.961 "uuid": "1a2a9b8d-8e9c-5570-86cb-2a6a1f52f97a", 00:07:44.961 "is_configured": true, 00:07:44.961 "data_offset": 2048, 00:07:44.961 "data_size": 63488 00:07:44.961 }, 00:07:44.961 { 00:07:44.961 "name": "BaseBdev2", 00:07:44.961 "uuid": "f6c36cee-901e-5611-baa1-a0569bee20fb", 00:07:44.961 "is_configured": true, 00:07:44.961 "data_offset": 2048, 00:07:44.961 "data_size": 63488 
00:07:44.961 } 00:07:44.961 ] 00:07:44.961 }' 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.961 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.221 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.221 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.221 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.221 [2024-11-28 16:20:36.900230] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.221 [2024-11-28 16:20:36.900309] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.221 [2024-11-28 16:20:36.902588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.221 [2024-11-28 16:20:36.902675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.221 [2024-11-28 16:20:36.902770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.221 [2024-11-28 16:20:36.902852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:45.221 { 00:07:45.221 "results": [ 00:07:45.221 { 00:07:45.221 "job": "raid_bdev1", 00:07:45.221 "core_mask": "0x1", 00:07:45.221 "workload": "randrw", 00:07:45.221 "percentage": 50, 00:07:45.221 "status": "finished", 00:07:45.221 "queue_depth": 1, 00:07:45.221 "io_size": 131072, 00:07:45.221 "runtime": 1.352027, 00:07:45.221 "iops": 20789.525652964032, 00:07:45.222 "mibps": 2598.690706620504, 00:07:45.222 "io_failed": 0, 00:07:45.222 "io_timeout": 0, 00:07:45.222 "avg_latency_us": 45.71095767230949, 00:07:45.222 "min_latency_us": 21.351965065502185, 00:07:45.222 "max_latency_us": 1345.0620087336245 00:07:45.222 } 00:07:45.222 ], 
00:07:45.222 "core_count": 1 00:07:45.222 } 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74812 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74812 ']' 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74812 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74812 00:07:45.222 killing process with pid 74812 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74812' 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74812 00:07:45.222 [2024-11-28 16:20:36.949925] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.222 16:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74812 00:07:45.222 [2024-11-28 16:20:36.965285] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vYa7H81XWd 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:45.482 00:07:45.482 real 0m3.198s 00:07:45.482 user 0m4.046s 00:07:45.482 sys 0m0.494s 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.482 16:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.482 ************************************ 00:07:45.482 END TEST raid_read_error_test 00:07:45.482 ************************************ 00:07:45.743 16:20:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:45.743 16:20:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.743 16:20:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.743 16:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.743 ************************************ 00:07:45.743 START TEST raid_write_error_test 00:07:45.743 ************************************ 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qPh9HFqVkI 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74941 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74941 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74941 ']' 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.743 16:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.743 [2024-11-28 16:20:37.384635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.743 [2024-11-28 16:20:37.384745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74941 ] 00:07:46.003 [2024-11-28 16:20:37.540537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.003 [2024-11-28 16:20:37.584253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.003 [2024-11-28 16:20:37.625930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.003 [2024-11-28 16:20:37.625965] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 BaseBdev1_malloc 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 true 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 [2024-11-28 16:20:38.243489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.573 [2024-11-28 16:20:38.243584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.573 [2024-11-28 16:20:38.243618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.573 [2024-11-28 16:20:38.243644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.573 [2024-11-28 16:20:38.245633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.573 [2024-11-28 16:20:38.245701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.573 BaseBdev1 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 BaseBdev2_malloc 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.573 16:20:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 true 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 [2024-11-28 16:20:38.297867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.573 [2024-11-28 16:20:38.297962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.573 [2024-11-28 16:20:38.298003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:46.573 [2024-11-28 16:20:38.298037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.573 [2024-11-28 16:20:38.300373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.573 [2024-11-28 16:20:38.300451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.573 BaseBdev2 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.573 [2024-11-28 16:20:38.309879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:46.573 [2024-11-28 16:20:38.311717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.573 [2024-11-28 16:20:38.311926] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:46.573 [2024-11-28 16:20:38.311975] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:46.573 [2024-11-28 16:20:38.312224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:46.573 [2024-11-28 16:20:38.312387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:46.573 [2024-11-28 16:20:38.312428] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:46.573 [2024-11-28 16:20:38.312595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.573 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.574 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.834 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.834 "name": "raid_bdev1", 00:07:46.834 "uuid": "a83667dc-b2e1-4a22-87a0-81e145d45d54", 00:07:46.834 "strip_size_kb": 0, 00:07:46.834 "state": "online", 00:07:46.834 "raid_level": "raid1", 00:07:46.834 "superblock": true, 00:07:46.834 "num_base_bdevs": 2, 00:07:46.834 "num_base_bdevs_discovered": 2, 00:07:46.834 "num_base_bdevs_operational": 2, 00:07:46.834 "base_bdevs_list": [ 00:07:46.834 { 00:07:46.834 "name": "BaseBdev1", 00:07:46.834 "uuid": "8538988d-0e1f-5972-a57d-b988410ea675", 00:07:46.834 "is_configured": true, 00:07:46.834 "data_offset": 2048, 00:07:46.834 "data_size": 63488 00:07:46.834 }, 00:07:46.834 { 00:07:46.834 "name": "BaseBdev2", 00:07:46.834 "uuid": "ad1ab06b-1816-5ebf-b1ce-7561b93bb03a", 00:07:46.834 "is_configured": true, 00:07:46.834 "data_offset": 2048, 00:07:46.834 "data_size": 63488 00:07:46.834 } 00:07:46.834 ] 00:07:46.834 }' 00:07:46.834 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.834 16:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.094 16:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:47.094 16:20:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.094 [2024-11-28 16:20:38.861261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 [2024-11-28 16:20:39.777489] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:48.033 [2024-11-28 16:20:39.777608] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.033 [2024-11-28 16:20:39.777825] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.033 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.292 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.292 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.292 "name": "raid_bdev1", 00:07:48.292 "uuid": "a83667dc-b2e1-4a22-87a0-81e145d45d54", 00:07:48.292 "strip_size_kb": 0, 00:07:48.292 "state": "online", 00:07:48.292 "raid_level": "raid1", 00:07:48.292 "superblock": true, 00:07:48.292 "num_base_bdevs": 2, 00:07:48.292 "num_base_bdevs_discovered": 1, 00:07:48.292 "num_base_bdevs_operational": 1, 00:07:48.292 "base_bdevs_list": [ 00:07:48.292 { 00:07:48.292 "name": null, 00:07:48.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.292 "is_configured": false, 00:07:48.292 "data_offset": 0, 00:07:48.292 "data_size": 63488 00:07:48.292 }, 00:07:48.292 { 00:07:48.292 "name": 
"BaseBdev2", 00:07:48.292 "uuid": "ad1ab06b-1816-5ebf-b1ce-7561b93bb03a", 00:07:48.292 "is_configured": true, 00:07:48.292 "data_offset": 2048, 00:07:48.292 "data_size": 63488 00:07:48.292 } 00:07:48.292 ] 00:07:48.292 }' 00:07:48.292 16:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.292 16:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.552 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.552 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.552 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.553 [2024-11-28 16:20:40.247532] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.553 [2024-11-28 16:20:40.247616] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.553 [2024-11-28 16:20:40.249985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.553 [2024-11-28 16:20:40.250064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.553 [2024-11-28 16:20:40.250131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.553 [2024-11-28 16:20:40.250172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:48.553 { 00:07:48.553 "results": [ 00:07:48.553 { 00:07:48.553 "job": "raid_bdev1", 00:07:48.553 "core_mask": "0x1", 00:07:48.553 "workload": "randrw", 00:07:48.553 "percentage": 50, 00:07:48.553 "status": "finished", 00:07:48.553 "queue_depth": 1, 00:07:48.553 "io_size": 131072, 00:07:48.553 "runtime": 1.387257, 00:07:48.553 "iops": 24313.44732807259, 00:07:48.553 "mibps": 3039.180916009074, 00:07:48.553 "io_failed": 0, 00:07:48.553 "io_timeout": 0, 
00:07:48.553 "avg_latency_us": 38.68837340937741, 00:07:48.553 "min_latency_us": 20.90480349344978, 00:07:48.553 "max_latency_us": 1366.5257641921398 00:07:48.553 } 00:07:48.553 ], 00:07:48.553 "core_count": 1 00:07:48.553 } 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74941 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74941 ']' 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74941 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74941 00:07:48.553 killing process with pid 74941 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74941' 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74941 00:07:48.553 [2024-11-28 16:20:40.297936] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.553 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74941 00:07:48.553 [2024-11-28 16:20:40.312526] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qPh9HFqVkI 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:48.813 ************************************ 00:07:48.813 END TEST raid_write_error_test 00:07:48.813 ************************************ 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:48.813 00:07:48.813 real 0m3.272s 00:07:48.813 user 0m4.170s 00:07:48.813 sys 0m0.520s 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.813 16:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 16:20:40 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:49.073 16:20:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:49.073 16:20:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:49.073 16:20:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:49.073 16:20:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.073 16:20:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 ************************************ 00:07:49.074 START TEST raid_state_function_test 00:07:49.074 ************************************ 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.074 
16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75068 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75068' 00:07:49.074 Process raid pid: 75068 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75068 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75068 ']' 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.074 16:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.074 [2024-11-28 16:20:40.728046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.074 [2024-11-28 16:20:40.728255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.334 [2024-11-28 16:20:40.888971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.334 [2024-11-28 16:20:40.934175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.334 [2024-11-28 16:20:40.975546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.334 [2024-11-28 16:20:40.975581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.904 [2024-11-28 16:20:41.548529] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.904 [2024-11-28 16:20:41.548623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.904 [2024-11-28 16:20:41.548666] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.904 [2024-11-28 16:20:41.548689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.904 [2024-11-28 16:20:41.548707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:49.904 [2024-11-28 16:20:41.548730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.904 16:20:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.904 "name": "Existed_Raid", 00:07:49.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.904 "strip_size_kb": 64, 00:07:49.904 "state": "configuring", 00:07:49.904 "raid_level": "raid0", 00:07:49.904 "superblock": false, 00:07:49.904 "num_base_bdevs": 3, 00:07:49.904 "num_base_bdevs_discovered": 0, 00:07:49.904 "num_base_bdevs_operational": 3, 00:07:49.904 "base_bdevs_list": [ 00:07:49.904 { 00:07:49.904 "name": "BaseBdev1", 00:07:49.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.904 "is_configured": false, 00:07:49.904 "data_offset": 0, 00:07:49.904 "data_size": 0 00:07:49.904 }, 00:07:49.904 { 00:07:49.904 "name": "BaseBdev2", 00:07:49.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.904 "is_configured": false, 00:07:49.904 "data_offset": 0, 00:07:49.904 "data_size": 0 00:07:49.904 }, 00:07:49.904 { 00:07:49.904 "name": "BaseBdev3", 00:07:49.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.904 "is_configured": false, 00:07:49.904 "data_offset": 0, 00:07:49.904 "data_size": 0 00:07:49.904 } 00:07:49.904 ] 00:07:49.904 }' 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.904 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.475 16:20:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.475 [2024-11-28 16:20:41.939793] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.475 [2024-11-28 16:20:41.939883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.475 [2024-11-28 16:20:41.951817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.475 [2024-11-28 16:20:41.951903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.475 [2024-11-28 16:20:41.951929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.475 [2024-11-28 16:20:41.951952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.475 [2024-11-28 16:20:41.951970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:50.475 [2024-11-28 16:20:41.951990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.475 [2024-11-28 16:20:41.972349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.475 BaseBdev1 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.475 16:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.475 [ 00:07:50.475 { 00:07:50.475 "name": "BaseBdev1", 00:07:50.475 "aliases": [ 00:07:50.475 "e5b7bb11-980d-4ac3-9573-2b0668565fcf" 00:07:50.475 ], 00:07:50.475 
"product_name": "Malloc disk", 00:07:50.475 "block_size": 512, 00:07:50.475 "num_blocks": 65536, 00:07:50.475 "uuid": "e5b7bb11-980d-4ac3-9573-2b0668565fcf", 00:07:50.475 "assigned_rate_limits": { 00:07:50.475 "rw_ios_per_sec": 0, 00:07:50.475 "rw_mbytes_per_sec": 0, 00:07:50.475 "r_mbytes_per_sec": 0, 00:07:50.475 "w_mbytes_per_sec": 0 00:07:50.475 }, 00:07:50.475 "claimed": true, 00:07:50.476 "claim_type": "exclusive_write", 00:07:50.476 "zoned": false, 00:07:50.476 "supported_io_types": { 00:07:50.476 "read": true, 00:07:50.476 "write": true, 00:07:50.476 "unmap": true, 00:07:50.476 "flush": true, 00:07:50.476 "reset": true, 00:07:50.476 "nvme_admin": false, 00:07:50.476 "nvme_io": false, 00:07:50.476 "nvme_io_md": false, 00:07:50.476 "write_zeroes": true, 00:07:50.476 "zcopy": true, 00:07:50.476 "get_zone_info": false, 00:07:50.476 "zone_management": false, 00:07:50.476 "zone_append": false, 00:07:50.476 "compare": false, 00:07:50.476 "compare_and_write": false, 00:07:50.476 "abort": true, 00:07:50.476 "seek_hole": false, 00:07:50.476 "seek_data": false, 00:07:50.476 "copy": true, 00:07:50.476 "nvme_iov_md": false 00:07:50.476 }, 00:07:50.476 "memory_domains": [ 00:07:50.476 { 00:07:50.476 "dma_device_id": "system", 00:07:50.476 "dma_device_type": 1 00:07:50.476 }, 00:07:50.476 { 00:07:50.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.476 "dma_device_type": 2 00:07:50.476 } 00:07:50.476 ], 00:07:50.476 "driver_specific": {} 00:07:50.476 } 00:07:50.476 ] 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.476 16:20:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.476 "name": "Existed_Raid", 00:07:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.476 "strip_size_kb": 64, 00:07:50.476 "state": "configuring", 00:07:50.476 "raid_level": "raid0", 00:07:50.476 "superblock": false, 00:07:50.476 "num_base_bdevs": 3, 00:07:50.476 "num_base_bdevs_discovered": 1, 00:07:50.476 "num_base_bdevs_operational": 3, 00:07:50.476 "base_bdevs_list": [ 00:07:50.476 { 00:07:50.476 "name": "BaseBdev1", 
00:07:50.476 "uuid": "e5b7bb11-980d-4ac3-9573-2b0668565fcf", 00:07:50.476 "is_configured": true, 00:07:50.476 "data_offset": 0, 00:07:50.476 "data_size": 65536 00:07:50.476 }, 00:07:50.476 { 00:07:50.476 "name": "BaseBdev2", 00:07:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.476 "is_configured": false, 00:07:50.476 "data_offset": 0, 00:07:50.476 "data_size": 0 00:07:50.476 }, 00:07:50.476 { 00:07:50.476 "name": "BaseBdev3", 00:07:50.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.476 "is_configured": false, 00:07:50.476 "data_offset": 0, 00:07:50.476 "data_size": 0 00:07:50.476 } 00:07:50.476 ] 00:07:50.476 }' 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.476 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.737 [2024-11-28 16:20:42.371702] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.737 [2024-11-28 16:20:42.371782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.737 [2024-11-28 
16:20:42.383709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.737 [2024-11-28 16:20:42.385459] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.737 [2024-11-28 16:20:42.385529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.737 [2024-11-28 16:20:42.385556] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:50.737 [2024-11-28 16:20:42.385579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.737 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.737 "name": "Existed_Raid", 00:07:50.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.737 "strip_size_kb": 64, 00:07:50.737 "state": "configuring", 00:07:50.737 "raid_level": "raid0", 00:07:50.737 "superblock": false, 00:07:50.737 "num_base_bdevs": 3, 00:07:50.737 "num_base_bdevs_discovered": 1, 00:07:50.737 "num_base_bdevs_operational": 3, 00:07:50.737 "base_bdevs_list": [ 00:07:50.737 { 00:07:50.737 "name": "BaseBdev1", 00:07:50.737 "uuid": "e5b7bb11-980d-4ac3-9573-2b0668565fcf", 00:07:50.737 "is_configured": true, 00:07:50.737 "data_offset": 0, 00:07:50.737 "data_size": 65536 00:07:50.737 }, 00:07:50.737 { 00:07:50.737 "name": "BaseBdev2", 00:07:50.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.737 "is_configured": false, 00:07:50.737 "data_offset": 0, 00:07:50.737 "data_size": 0 00:07:50.737 }, 00:07:50.737 { 00:07:50.738 "name": "BaseBdev3", 00:07:50.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.738 "is_configured": false, 00:07:50.738 "data_offset": 0, 00:07:50.738 "data_size": 0 00:07:50.738 } 00:07:50.738 ] 00:07:50.738 }' 00:07:50.738 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:50.738 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.309 [2024-11-28 16:20:42.883414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.309 BaseBdev2 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.309 16:20:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.309 [ 00:07:51.309 { 00:07:51.309 "name": "BaseBdev2", 00:07:51.309 "aliases": [ 00:07:51.309 "e898abcd-a6a6-4720-917a-1f193dad8623" 00:07:51.309 ], 00:07:51.309 "product_name": "Malloc disk", 00:07:51.309 "block_size": 512, 00:07:51.309 "num_blocks": 65536, 00:07:51.309 "uuid": "e898abcd-a6a6-4720-917a-1f193dad8623", 00:07:51.309 "assigned_rate_limits": { 00:07:51.309 "rw_ios_per_sec": 0, 00:07:51.309 "rw_mbytes_per_sec": 0, 00:07:51.309 "r_mbytes_per_sec": 0, 00:07:51.309 "w_mbytes_per_sec": 0 00:07:51.309 }, 00:07:51.309 "claimed": true, 00:07:51.309 "claim_type": "exclusive_write", 00:07:51.309 "zoned": false, 00:07:51.309 "supported_io_types": { 00:07:51.309 "read": true, 00:07:51.309 "write": true, 00:07:51.309 "unmap": true, 00:07:51.309 "flush": true, 00:07:51.309 "reset": true, 00:07:51.309 "nvme_admin": false, 00:07:51.309 "nvme_io": false, 00:07:51.309 "nvme_io_md": false, 00:07:51.309 "write_zeroes": true, 00:07:51.309 "zcopy": true, 00:07:51.309 "get_zone_info": false, 00:07:51.309 "zone_management": false, 00:07:51.309 "zone_append": false, 00:07:51.309 "compare": false, 00:07:51.309 "compare_and_write": false, 00:07:51.309 "abort": true, 00:07:51.309 "seek_hole": false, 00:07:51.309 "seek_data": false, 00:07:51.309 "copy": true, 00:07:51.309 "nvme_iov_md": false 00:07:51.309 }, 00:07:51.309 "memory_domains": [ 00:07:51.309 { 00:07:51.309 "dma_device_id": "system", 00:07:51.309 "dma_device_type": 1 00:07:51.309 }, 00:07:51.309 { 00:07:51.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.309 "dma_device_type": 2 00:07:51.309 } 00:07:51.309 ], 00:07:51.309 "driver_specific": {} 00:07:51.309 } 00:07:51.309 ] 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.309 16:20:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.309 "name": "Existed_Raid", 00:07:51.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.309 "strip_size_kb": 64, 00:07:51.309 "state": "configuring", 00:07:51.309 "raid_level": "raid0", 00:07:51.309 "superblock": false, 00:07:51.309 "num_base_bdevs": 3, 00:07:51.309 "num_base_bdevs_discovered": 2, 00:07:51.309 "num_base_bdevs_operational": 3, 00:07:51.309 "base_bdevs_list": [ 00:07:51.309 { 00:07:51.309 "name": "BaseBdev1", 00:07:51.309 "uuid": "e5b7bb11-980d-4ac3-9573-2b0668565fcf", 00:07:51.309 "is_configured": true, 00:07:51.309 "data_offset": 0, 00:07:51.309 "data_size": 65536 00:07:51.309 }, 00:07:51.309 { 00:07:51.309 "name": "BaseBdev2", 00:07:51.309 "uuid": "e898abcd-a6a6-4720-917a-1f193dad8623", 00:07:51.309 "is_configured": true, 00:07:51.309 "data_offset": 0, 00:07:51.309 "data_size": 65536 00:07:51.309 }, 00:07:51.309 { 00:07:51.309 "name": "BaseBdev3", 00:07:51.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.309 "is_configured": false, 00:07:51.309 "data_offset": 0, 00:07:51.309 "data_size": 0 00:07:51.309 } 00:07:51.309 ] 00:07:51.309 }' 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.309 16:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.879 [2024-11-28 16:20:43.377346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:51.879 [2024-11-28 16:20:43.377437] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:51.879 [2024-11-28 16:20:43.377465] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:51.879 [2024-11-28 16:20:43.377771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:51.879 [2024-11-28 16:20:43.377954] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:51.879 [2024-11-28 16:20:43.377999] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:51.879 [2024-11-28 16:20:43.378233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.879 BaseBdev3 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.879 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.879 
16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.880 [ 00:07:51.880 { 00:07:51.880 "name": "BaseBdev3", 00:07:51.880 "aliases": [ 00:07:51.880 "7c0bb1d9-43fa-42c4-aea7-2480dc24b7c7" 00:07:51.880 ], 00:07:51.880 "product_name": "Malloc disk", 00:07:51.880 "block_size": 512, 00:07:51.880 "num_blocks": 65536, 00:07:51.880 "uuid": "7c0bb1d9-43fa-42c4-aea7-2480dc24b7c7", 00:07:51.880 "assigned_rate_limits": { 00:07:51.880 "rw_ios_per_sec": 0, 00:07:51.880 "rw_mbytes_per_sec": 0, 00:07:51.880 "r_mbytes_per_sec": 0, 00:07:51.880 "w_mbytes_per_sec": 0 00:07:51.880 }, 00:07:51.880 "claimed": true, 00:07:51.880 "claim_type": "exclusive_write", 00:07:51.880 "zoned": false, 00:07:51.880 "supported_io_types": { 00:07:51.880 "read": true, 00:07:51.880 "write": true, 00:07:51.880 "unmap": true, 00:07:51.880 "flush": true, 00:07:51.880 "reset": true, 00:07:51.880 "nvme_admin": false, 00:07:51.880 "nvme_io": false, 00:07:51.880 "nvme_io_md": false, 00:07:51.880 "write_zeroes": true, 00:07:51.880 "zcopy": true, 00:07:51.880 "get_zone_info": false, 00:07:51.880 "zone_management": false, 00:07:51.880 "zone_append": false, 00:07:51.880 "compare": false, 00:07:51.880 "compare_and_write": false, 00:07:51.880 "abort": true, 00:07:51.880 "seek_hole": false, 00:07:51.880 "seek_data": false, 00:07:51.880 "copy": true, 00:07:51.880 "nvme_iov_md": false 00:07:51.880 }, 00:07:51.880 "memory_domains": [ 00:07:51.880 { 00:07:51.880 "dma_device_id": "system", 00:07:51.880 "dma_device_type": 1 00:07:51.880 }, 00:07:51.880 { 00:07:51.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.880 "dma_device_type": 2 00:07:51.880 } 00:07:51.880 ], 00:07:51.880 "driver_specific": {} 00:07:51.880 } 00:07:51.880 ] 
00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.880 "name": "Existed_Raid", 00:07:51.880 "uuid": "d62d3c31-28cb-4942-82cb-ca1fd51624de", 00:07:51.880 "strip_size_kb": 64, 00:07:51.880 "state": "online", 00:07:51.880 "raid_level": "raid0", 00:07:51.880 "superblock": false, 00:07:51.880 "num_base_bdevs": 3, 00:07:51.880 "num_base_bdevs_discovered": 3, 00:07:51.880 "num_base_bdevs_operational": 3, 00:07:51.880 "base_bdevs_list": [ 00:07:51.880 { 00:07:51.880 "name": "BaseBdev1", 00:07:51.880 "uuid": "e5b7bb11-980d-4ac3-9573-2b0668565fcf", 00:07:51.880 "is_configured": true, 00:07:51.880 "data_offset": 0, 00:07:51.880 "data_size": 65536 00:07:51.880 }, 00:07:51.880 { 00:07:51.880 "name": "BaseBdev2", 00:07:51.880 "uuid": "e898abcd-a6a6-4720-917a-1f193dad8623", 00:07:51.880 "is_configured": true, 00:07:51.880 "data_offset": 0, 00:07:51.880 "data_size": 65536 00:07:51.880 }, 00:07:51.880 { 00:07:51.880 "name": "BaseBdev3", 00:07:51.880 "uuid": "7c0bb1d9-43fa-42c4-aea7-2480dc24b7c7", 00:07:51.880 "is_configured": true, 00:07:51.880 "data_offset": 0, 00:07:51.880 "data_size": 65536 00:07:51.880 } 00:07:51.880 ] 00:07:51.880 }' 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.880 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.140 [2024-11-28 16:20:43.824865] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.140 "name": "Existed_Raid", 00:07:52.140 "aliases": [ 00:07:52.140 "d62d3c31-28cb-4942-82cb-ca1fd51624de" 00:07:52.140 ], 00:07:52.140 "product_name": "Raid Volume", 00:07:52.140 "block_size": 512, 00:07:52.140 "num_blocks": 196608, 00:07:52.140 "uuid": "d62d3c31-28cb-4942-82cb-ca1fd51624de", 00:07:52.140 "assigned_rate_limits": { 00:07:52.140 "rw_ios_per_sec": 0, 00:07:52.140 "rw_mbytes_per_sec": 0, 00:07:52.140 "r_mbytes_per_sec": 0, 00:07:52.140 "w_mbytes_per_sec": 0 00:07:52.140 }, 00:07:52.140 "claimed": false, 00:07:52.140 "zoned": false, 00:07:52.140 "supported_io_types": { 00:07:52.140 "read": true, 00:07:52.140 "write": true, 00:07:52.140 "unmap": true, 00:07:52.140 "flush": true, 00:07:52.140 "reset": true, 00:07:52.140 "nvme_admin": false, 00:07:52.140 "nvme_io": false, 00:07:52.140 "nvme_io_md": false, 00:07:52.140 "write_zeroes": true, 00:07:52.140 "zcopy": false, 00:07:52.140 "get_zone_info": false, 00:07:52.140 "zone_management": false, 00:07:52.140 
"zone_append": false, 00:07:52.140 "compare": false, 00:07:52.140 "compare_and_write": false, 00:07:52.140 "abort": false, 00:07:52.140 "seek_hole": false, 00:07:52.140 "seek_data": false, 00:07:52.140 "copy": false, 00:07:52.140 "nvme_iov_md": false 00:07:52.140 }, 00:07:52.140 "memory_domains": [ 00:07:52.140 { 00:07:52.140 "dma_device_id": "system", 00:07:52.140 "dma_device_type": 1 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.140 "dma_device_type": 2 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "dma_device_id": "system", 00:07:52.140 "dma_device_type": 1 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.140 "dma_device_type": 2 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "dma_device_id": "system", 00:07:52.140 "dma_device_type": 1 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.140 "dma_device_type": 2 00:07:52.140 } 00:07:52.140 ], 00:07:52.140 "driver_specific": { 00:07:52.140 "raid": { 00:07:52.140 "uuid": "d62d3c31-28cb-4942-82cb-ca1fd51624de", 00:07:52.140 "strip_size_kb": 64, 00:07:52.140 "state": "online", 00:07:52.140 "raid_level": "raid0", 00:07:52.140 "superblock": false, 00:07:52.140 "num_base_bdevs": 3, 00:07:52.140 "num_base_bdevs_discovered": 3, 00:07:52.140 "num_base_bdevs_operational": 3, 00:07:52.140 "base_bdevs_list": [ 00:07:52.140 { 00:07:52.140 "name": "BaseBdev1", 00:07:52.140 "uuid": "e5b7bb11-980d-4ac3-9573-2b0668565fcf", 00:07:52.140 "is_configured": true, 00:07:52.140 "data_offset": 0, 00:07:52.140 "data_size": 65536 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "name": "BaseBdev2", 00:07:52.140 "uuid": "e898abcd-a6a6-4720-917a-1f193dad8623", 00:07:52.140 "is_configured": true, 00:07:52.140 "data_offset": 0, 00:07:52.140 "data_size": 65536 00:07:52.140 }, 00:07:52.140 { 00:07:52.140 "name": "BaseBdev3", 00:07:52.140 "uuid": "7c0bb1d9-43fa-42c4-aea7-2480dc24b7c7", 00:07:52.140 "is_configured": true, 
00:07:52.140 "data_offset": 0, 00:07:52.140 "data_size": 65536 00:07:52.140 } 00:07:52.140 ] 00:07:52.140 } 00:07:52.140 } 00:07:52.140 }' 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.140 BaseBdev2 00:07:52.140 BaseBdev3' 00:07:52.140 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.400 16:20:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.401 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.401 16:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.401 16:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.401 [2024-11-28 16:20:44.064259] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.401 [2024-11-28 16:20:44.064323] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.401 [2024-11-28 16:20:44.064394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.401 "name": "Existed_Raid", 00:07:52.401 "uuid": "d62d3c31-28cb-4942-82cb-ca1fd51624de", 00:07:52.401 "strip_size_kb": 64, 00:07:52.401 "state": "offline", 00:07:52.401 "raid_level": "raid0", 00:07:52.401 "superblock": false, 00:07:52.401 "num_base_bdevs": 3, 00:07:52.401 "num_base_bdevs_discovered": 2, 00:07:52.401 "num_base_bdevs_operational": 2, 00:07:52.401 "base_bdevs_list": [ 00:07:52.401 { 00:07:52.401 "name": null, 00:07:52.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.401 "is_configured": false, 00:07:52.401 "data_offset": 0, 00:07:52.401 "data_size": 65536 00:07:52.401 }, 00:07:52.401 { 00:07:52.401 "name": "BaseBdev2", 00:07:52.401 "uuid": "e898abcd-a6a6-4720-917a-1f193dad8623", 00:07:52.401 "is_configured": true, 00:07:52.401 "data_offset": 0, 00:07:52.401 "data_size": 65536 00:07:52.401 }, 00:07:52.401 { 00:07:52.401 "name": "BaseBdev3", 00:07:52.401 "uuid": "7c0bb1d9-43fa-42c4-aea7-2480dc24b7c7", 00:07:52.401 "is_configured": true, 00:07:52.401 "data_offset": 0, 00:07:52.401 "data_size": 65536 00:07:52.401 } 00:07:52.401 ] 00:07:52.401 }' 00:07:52.401 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.401 16:20:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 [2024-11-28 16:20:44.586801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 [2024-11-28 16:20:44.637626] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:53.008 [2024-11-28 16:20:44.637710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.008 16:20:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 BaseBdev2 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.009 [ 00:07:53.009 { 00:07:53.009 "name": "BaseBdev2", 00:07:53.009 "aliases": [ 00:07:53.009 "8ed2eb8f-4657-43c4-859a-7c107fba6a2a" 00:07:53.009 ], 00:07:53.009 "product_name": "Malloc disk", 00:07:53.009 "block_size": 512, 00:07:53.009 "num_blocks": 65536, 00:07:53.009 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:53.009 "assigned_rate_limits": { 00:07:53.009 "rw_ios_per_sec": 0, 00:07:53.009 "rw_mbytes_per_sec": 0, 00:07:53.009 "r_mbytes_per_sec": 0, 00:07:53.009 "w_mbytes_per_sec": 0 00:07:53.009 }, 00:07:53.009 "claimed": false, 00:07:53.009 "zoned": false, 00:07:53.009 "supported_io_types": { 00:07:53.009 "read": true, 00:07:53.009 "write": true, 00:07:53.009 "unmap": true, 00:07:53.009 "flush": true, 00:07:53.009 "reset": true, 00:07:53.009 "nvme_admin": false, 00:07:53.009 "nvme_io": false, 00:07:53.009 "nvme_io_md": false, 00:07:53.009 "write_zeroes": true, 00:07:53.009 "zcopy": true, 00:07:53.009 "get_zone_info": false, 00:07:53.009 "zone_management": false, 00:07:53.009 "zone_append": false, 00:07:53.009 "compare": false, 00:07:53.009 "compare_and_write": false, 00:07:53.009 "abort": true, 00:07:53.009 "seek_hole": false, 00:07:53.009 "seek_data": false, 00:07:53.009 "copy": true, 00:07:53.009 "nvme_iov_md": false 00:07:53.009 }, 00:07:53.009 "memory_domains": [ 00:07:53.009 { 00:07:53.009 "dma_device_id": "system", 00:07:53.009 "dma_device_type": 1 00:07:53.009 }, 00:07:53.009 { 00:07:53.009 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:53.009 "dma_device_type": 2 00:07:53.009 } 00:07:53.009 ], 00:07:53.009 "driver_specific": {} 00:07:53.009 } 00:07:53.009 ] 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.009 BaseBdev3 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.009 16:20:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.009 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 [ 00:07:53.269 { 00:07:53.269 "name": "BaseBdev3", 00:07:53.269 "aliases": [ 00:07:53.269 "c5caa6a9-d2ad-476d-bc27-df4bfc11caac" 00:07:53.269 ], 00:07:53.269 "product_name": "Malloc disk", 00:07:53.269 "block_size": 512, 00:07:53.269 "num_blocks": 65536, 00:07:53.269 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:53.269 "assigned_rate_limits": { 00:07:53.269 "rw_ios_per_sec": 0, 00:07:53.269 "rw_mbytes_per_sec": 0, 00:07:53.269 "r_mbytes_per_sec": 0, 00:07:53.269 "w_mbytes_per_sec": 0 00:07:53.269 }, 00:07:53.269 "claimed": false, 00:07:53.269 "zoned": false, 00:07:53.269 "supported_io_types": { 00:07:53.269 "read": true, 00:07:53.269 "write": true, 00:07:53.269 "unmap": true, 00:07:53.269 "flush": true, 00:07:53.269 "reset": true, 00:07:53.269 "nvme_admin": false, 00:07:53.269 "nvme_io": false, 00:07:53.269 "nvme_io_md": false, 00:07:53.269 "write_zeroes": true, 00:07:53.269 "zcopy": true, 00:07:53.269 "get_zone_info": false, 00:07:53.269 "zone_management": false, 00:07:53.269 "zone_append": false, 00:07:53.269 "compare": false, 00:07:53.269 "compare_and_write": false, 00:07:53.269 "abort": true, 00:07:53.269 "seek_hole": false, 00:07:53.269 "seek_data": false, 00:07:53.269 "copy": true, 00:07:53.269 "nvme_iov_md": false 00:07:53.269 }, 00:07:53.269 "memory_domains": [ 00:07:53.269 { 00:07:53.269 "dma_device_id": "system", 00:07:53.269 "dma_device_type": 1 00:07:53.269 }, 00:07:53.269 { 00:07:53.269 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:53.269 "dma_device_type": 2 00:07:53.269 } 00:07:53.269 ], 00:07:53.269 "driver_specific": {} 00:07:53.269 } 00:07:53.269 ] 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 [2024-11-28 16:20:44.807179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.269 [2024-11-28 16:20:44.807258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.269 [2024-11-28 16:20:44.807295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.269 [2024-11-28 16:20:44.808979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.269 
16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.269 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.269 "name": "Existed_Raid", 00:07:53.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.269 "strip_size_kb": 64, 00:07:53.269 "state": "configuring", 00:07:53.269 "raid_level": "raid0", 00:07:53.269 "superblock": false, 00:07:53.269 "num_base_bdevs": 3, 00:07:53.269 "num_base_bdevs_discovered": 2, 00:07:53.269 "num_base_bdevs_operational": 3, 00:07:53.269 "base_bdevs_list": [ 00:07:53.269 { 00:07:53.269 "name": "BaseBdev1", 00:07:53.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.269 "is_configured": false, 00:07:53.269 
"data_offset": 0, 00:07:53.269 "data_size": 0 00:07:53.269 }, 00:07:53.269 { 00:07:53.269 "name": "BaseBdev2", 00:07:53.269 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:53.269 "is_configured": true, 00:07:53.269 "data_offset": 0, 00:07:53.269 "data_size": 65536 00:07:53.269 }, 00:07:53.269 { 00:07:53.269 "name": "BaseBdev3", 00:07:53.269 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:53.269 "is_configured": true, 00:07:53.269 "data_offset": 0, 00:07:53.269 "data_size": 65536 00:07:53.269 } 00:07:53.269 ] 00:07:53.269 }' 00:07:53.270 16:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.270 16:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.530 [2024-11-28 16:20:45.242405] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.530 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.531 "name": "Existed_Raid", 00:07:53.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.531 "strip_size_kb": 64, 00:07:53.531 "state": "configuring", 00:07:53.531 "raid_level": "raid0", 00:07:53.531 "superblock": false, 00:07:53.531 "num_base_bdevs": 3, 00:07:53.531 "num_base_bdevs_discovered": 1, 00:07:53.531 "num_base_bdevs_operational": 3, 00:07:53.531 "base_bdevs_list": [ 00:07:53.531 { 00:07:53.531 "name": "BaseBdev1", 00:07:53.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.531 "is_configured": false, 00:07:53.531 "data_offset": 0, 00:07:53.531 "data_size": 0 00:07:53.531 }, 00:07:53.531 { 00:07:53.531 "name": null, 00:07:53.531 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:53.531 "is_configured": false, 00:07:53.531 "data_offset": 0, 00:07:53.531 "data_size": 65536 00:07:53.531 }, 00:07:53.531 { 
00:07:53.531 "name": "BaseBdev3", 00:07:53.531 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:53.531 "is_configured": true, 00:07:53.531 "data_offset": 0, 00:07:53.531 "data_size": 65536 00:07:53.531 } 00:07:53.531 ] 00:07:53.531 }' 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.531 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 [2024-11-28 16:20:45.696751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.102 BaseBdev1 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:54.102 16:20:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 [ 00:07:54.102 { 00:07:54.102 "name": "BaseBdev1", 00:07:54.102 "aliases": [ 00:07:54.102 "fcb33825-e627-4c18-a40c-fa67233019fd" 00:07:54.102 ], 00:07:54.102 "product_name": "Malloc disk", 00:07:54.102 "block_size": 512, 00:07:54.102 "num_blocks": 65536, 00:07:54.102 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:54.102 "assigned_rate_limits": { 00:07:54.102 "rw_ios_per_sec": 0, 00:07:54.102 "rw_mbytes_per_sec": 0, 00:07:54.102 "r_mbytes_per_sec": 0, 00:07:54.102 "w_mbytes_per_sec": 0 00:07:54.102 }, 00:07:54.102 "claimed": true, 00:07:54.102 "claim_type": "exclusive_write", 00:07:54.102 "zoned": false, 00:07:54.102 "supported_io_types": { 00:07:54.102 "read": true, 00:07:54.102 "write": true, 00:07:54.102 "unmap": true, 00:07:54.102 "flush": true, 
00:07:54.102 "reset": true, 00:07:54.102 "nvme_admin": false, 00:07:54.102 "nvme_io": false, 00:07:54.102 "nvme_io_md": false, 00:07:54.102 "write_zeroes": true, 00:07:54.102 "zcopy": true, 00:07:54.102 "get_zone_info": false, 00:07:54.102 "zone_management": false, 00:07:54.102 "zone_append": false, 00:07:54.102 "compare": false, 00:07:54.102 "compare_and_write": false, 00:07:54.102 "abort": true, 00:07:54.102 "seek_hole": false, 00:07:54.102 "seek_data": false, 00:07:54.102 "copy": true, 00:07:54.102 "nvme_iov_md": false 00:07:54.102 }, 00:07:54.102 "memory_domains": [ 00:07:54.102 { 00:07:54.102 "dma_device_id": "system", 00:07:54.102 "dma_device_type": 1 00:07:54.102 }, 00:07:54.102 { 00:07:54.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.102 "dma_device_type": 2 00:07:54.102 } 00:07:54.102 ], 00:07:54.102 "driver_specific": {} 00:07:54.102 } 00:07:54.102 ] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.102 "name": "Existed_Raid", 00:07:54.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.102 "strip_size_kb": 64, 00:07:54.102 "state": "configuring", 00:07:54.102 "raid_level": "raid0", 00:07:54.102 "superblock": false, 00:07:54.102 "num_base_bdevs": 3, 00:07:54.102 "num_base_bdevs_discovered": 2, 00:07:54.102 "num_base_bdevs_operational": 3, 00:07:54.102 "base_bdevs_list": [ 00:07:54.102 { 00:07:54.102 "name": "BaseBdev1", 00:07:54.102 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:54.102 "is_configured": true, 00:07:54.102 "data_offset": 0, 00:07:54.102 "data_size": 65536 00:07:54.102 }, 00:07:54.102 { 00:07:54.102 "name": null, 00:07:54.102 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:54.102 "is_configured": false, 00:07:54.102 "data_offset": 0, 00:07:54.102 "data_size": 65536 00:07:54.102 }, 00:07:54.102 { 00:07:54.102 "name": "BaseBdev3", 00:07:54.102 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:54.102 "is_configured": true, 00:07:54.102 "data_offset": 0, 00:07:54.102 "data_size": 65536 
00:07:54.102 } 00:07:54.102 ] 00:07:54.102 }' 00:07:54.103 16:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.103 16:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.363 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.363 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:54.363 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.363 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.363 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.623 [2024-11-28 16:20:46.148003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.623 
16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.623 "name": "Existed_Raid", 00:07:54.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.623 "strip_size_kb": 64, 00:07:54.623 "state": "configuring", 00:07:54.623 "raid_level": "raid0", 00:07:54.623 "superblock": false, 00:07:54.623 "num_base_bdevs": 3, 00:07:54.623 "num_base_bdevs_discovered": 1, 00:07:54.623 "num_base_bdevs_operational": 3, 00:07:54.623 "base_bdevs_list": [ 00:07:54.623 { 00:07:54.623 "name": "BaseBdev1", 00:07:54.623 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:54.623 "is_configured": true, 00:07:54.623 "data_offset": 0, 00:07:54.623 "data_size": 65536 00:07:54.623 }, 00:07:54.623 { 00:07:54.623 "name": null, 
00:07:54.623 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:54.623 "is_configured": false, 00:07:54.623 "data_offset": 0, 00:07:54.623 "data_size": 65536 00:07:54.623 }, 00:07:54.623 { 00:07:54.623 "name": null, 00:07:54.623 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:54.623 "is_configured": false, 00:07:54.623 "data_offset": 0, 00:07:54.623 "data_size": 65536 00:07:54.623 } 00:07:54.623 ] 00:07:54.623 }' 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.623 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.883 [2024-11-28 16:20:46.647188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.883 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.143 "name": "Existed_Raid", 00:07:55.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.143 "strip_size_kb": 64, 00:07:55.143 "state": "configuring", 00:07:55.143 "raid_level": "raid0", 00:07:55.143 "superblock": false, 00:07:55.143 
"num_base_bdevs": 3, 00:07:55.143 "num_base_bdevs_discovered": 2, 00:07:55.143 "num_base_bdevs_operational": 3, 00:07:55.143 "base_bdevs_list": [ 00:07:55.143 { 00:07:55.143 "name": "BaseBdev1", 00:07:55.143 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:55.143 "is_configured": true, 00:07:55.143 "data_offset": 0, 00:07:55.143 "data_size": 65536 00:07:55.143 }, 00:07:55.143 { 00:07:55.143 "name": null, 00:07:55.143 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:55.143 "is_configured": false, 00:07:55.143 "data_offset": 0, 00:07:55.143 "data_size": 65536 00:07:55.143 }, 00:07:55.143 { 00:07:55.143 "name": "BaseBdev3", 00:07:55.143 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:55.143 "is_configured": true, 00:07:55.143 "data_offset": 0, 00:07:55.143 "data_size": 65536 00:07:55.143 } 00:07:55.143 ] 00:07:55.143 }' 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.143 16:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.404 16:20:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 [2024-11-28 16:20:47.094425] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.404 "name": "Existed_Raid", 00:07:55.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.404 "strip_size_kb": 64, 00:07:55.404 "state": "configuring", 00:07:55.404 "raid_level": "raid0", 00:07:55.404 "superblock": false, 00:07:55.404 "num_base_bdevs": 3, 00:07:55.404 "num_base_bdevs_discovered": 1, 00:07:55.404 "num_base_bdevs_operational": 3, 00:07:55.404 "base_bdevs_list": [ 00:07:55.404 { 00:07:55.404 "name": null, 00:07:55.404 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:55.404 "is_configured": false, 00:07:55.404 "data_offset": 0, 00:07:55.404 "data_size": 65536 00:07:55.404 }, 00:07:55.404 { 00:07:55.404 "name": null, 00:07:55.404 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:55.404 "is_configured": false, 00:07:55.404 "data_offset": 0, 00:07:55.404 "data_size": 65536 00:07:55.404 }, 00:07:55.404 { 00:07:55.404 "name": "BaseBdev3", 00:07:55.404 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:55.404 "is_configured": true, 00:07:55.404 "data_offset": 0, 00:07:55.404 "data_size": 65536 00:07:55.404 } 00:07:55.404 ] 00:07:55.404 }' 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.404 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.974 [2024-11-28 16:20:47.599981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.974 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.974 "name": "Existed_Raid", 00:07:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.974 "strip_size_kb": 64, 00:07:55.974 "state": "configuring", 00:07:55.974 "raid_level": "raid0", 00:07:55.975 "superblock": false, 00:07:55.975 "num_base_bdevs": 3, 00:07:55.975 "num_base_bdevs_discovered": 2, 00:07:55.975 "num_base_bdevs_operational": 3, 00:07:55.975 "base_bdevs_list": [ 00:07:55.975 { 00:07:55.975 "name": null, 00:07:55.975 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:55.975 "is_configured": false, 00:07:55.975 "data_offset": 0, 00:07:55.975 "data_size": 65536 00:07:55.975 }, 00:07:55.975 { 00:07:55.975 "name": "BaseBdev2", 00:07:55.975 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:55.975 "is_configured": true, 00:07:55.975 "data_offset": 0, 00:07:55.975 "data_size": 65536 00:07:55.975 }, 00:07:55.975 { 00:07:55.975 "name": "BaseBdev3", 00:07:55.975 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:55.975 "is_configured": true, 00:07:55.975 "data_offset": 0, 00:07:55.975 "data_size": 65536 00:07:55.975 } 00:07:55.975 ] 00:07:55.975 }' 00:07:55.975 16:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.975 16:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.544 
16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fcb33825-e627-4c18-a40c-fa67233019fd 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.544 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.544 [2024-11-28 16:20:48.110191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:56.544 [2024-11-28 16:20:48.110305] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:56.544 [2024-11-28 16:20:48.110333] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:56.545 [2024-11-28 16:20:48.110593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:07:56.545 [2024-11-28 16:20:48.110744] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:56.545 [2024-11-28 16:20:48.110784] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:07:56.545 [2024-11-28 16:20:48.111009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.545 NewBaseBdev 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:56.545 [ 00:07:56.545 { 00:07:56.545 "name": "NewBaseBdev", 00:07:56.545 "aliases": [ 00:07:56.545 "fcb33825-e627-4c18-a40c-fa67233019fd" 00:07:56.545 ], 00:07:56.545 "product_name": "Malloc disk", 00:07:56.545 "block_size": 512, 00:07:56.545 "num_blocks": 65536, 00:07:56.545 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:56.545 "assigned_rate_limits": { 00:07:56.545 "rw_ios_per_sec": 0, 00:07:56.545 "rw_mbytes_per_sec": 0, 00:07:56.545 "r_mbytes_per_sec": 0, 00:07:56.545 "w_mbytes_per_sec": 0 00:07:56.545 }, 00:07:56.545 "claimed": true, 00:07:56.545 "claim_type": "exclusive_write", 00:07:56.545 "zoned": false, 00:07:56.545 "supported_io_types": { 00:07:56.545 "read": true, 00:07:56.545 "write": true, 00:07:56.545 "unmap": true, 00:07:56.545 "flush": true, 00:07:56.545 "reset": true, 00:07:56.545 "nvme_admin": false, 00:07:56.545 "nvme_io": false, 00:07:56.545 "nvme_io_md": false, 00:07:56.545 "write_zeroes": true, 00:07:56.545 "zcopy": true, 00:07:56.545 "get_zone_info": false, 00:07:56.545 "zone_management": false, 00:07:56.545 "zone_append": false, 00:07:56.545 "compare": false, 00:07:56.545 "compare_and_write": false, 00:07:56.545 "abort": true, 00:07:56.545 "seek_hole": false, 00:07:56.545 "seek_data": false, 00:07:56.545 "copy": true, 00:07:56.545 "nvme_iov_md": false 00:07:56.545 }, 00:07:56.545 "memory_domains": [ 00:07:56.545 { 00:07:56.545 "dma_device_id": "system", 00:07:56.545 "dma_device_type": 1 00:07:56.545 }, 00:07:56.545 { 00:07:56.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.545 "dma_device_type": 2 00:07:56.545 } 00:07:56.545 ], 00:07:56.545 "driver_specific": {} 00:07:56.545 } 00:07:56.545 ] 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.545 "name": "Existed_Raid", 00:07:56.545 "uuid": "4f2463f1-a1b4-4628-8840-76dc2e03d796", 00:07:56.545 "strip_size_kb": 64, 00:07:56.545 "state": "online", 00:07:56.545 "raid_level": "raid0", 00:07:56.545 "superblock": false, 00:07:56.545 "num_base_bdevs": 3, 00:07:56.545 
"num_base_bdevs_discovered": 3, 00:07:56.545 "num_base_bdevs_operational": 3, 00:07:56.545 "base_bdevs_list": [ 00:07:56.545 { 00:07:56.545 "name": "NewBaseBdev", 00:07:56.545 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:56.545 "is_configured": true, 00:07:56.545 "data_offset": 0, 00:07:56.545 "data_size": 65536 00:07:56.545 }, 00:07:56.545 { 00:07:56.545 "name": "BaseBdev2", 00:07:56.545 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:56.545 "is_configured": true, 00:07:56.545 "data_offset": 0, 00:07:56.545 "data_size": 65536 00:07:56.545 }, 00:07:56.545 { 00:07:56.545 "name": "BaseBdev3", 00:07:56.545 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:56.545 "is_configured": true, 00:07:56.545 "data_offset": 0, 00:07:56.545 "data_size": 65536 00:07:56.545 } 00:07:56.545 ] 00:07:56.545 }' 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.545 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.805 [2024-11-28 16:20:48.549773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.805 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.066 "name": "Existed_Raid", 00:07:57.066 "aliases": [ 00:07:57.066 "4f2463f1-a1b4-4628-8840-76dc2e03d796" 00:07:57.066 ], 00:07:57.066 "product_name": "Raid Volume", 00:07:57.066 "block_size": 512, 00:07:57.066 "num_blocks": 196608, 00:07:57.066 "uuid": "4f2463f1-a1b4-4628-8840-76dc2e03d796", 00:07:57.066 "assigned_rate_limits": { 00:07:57.066 "rw_ios_per_sec": 0, 00:07:57.066 "rw_mbytes_per_sec": 0, 00:07:57.066 "r_mbytes_per_sec": 0, 00:07:57.066 "w_mbytes_per_sec": 0 00:07:57.066 }, 00:07:57.066 "claimed": false, 00:07:57.066 "zoned": false, 00:07:57.066 "supported_io_types": { 00:07:57.066 "read": true, 00:07:57.066 "write": true, 00:07:57.066 "unmap": true, 00:07:57.066 "flush": true, 00:07:57.066 "reset": true, 00:07:57.066 "nvme_admin": false, 00:07:57.066 "nvme_io": false, 00:07:57.066 "nvme_io_md": false, 00:07:57.066 "write_zeroes": true, 00:07:57.066 "zcopy": false, 00:07:57.066 "get_zone_info": false, 00:07:57.066 "zone_management": false, 00:07:57.066 "zone_append": false, 00:07:57.066 "compare": false, 00:07:57.066 "compare_and_write": false, 00:07:57.066 "abort": false, 00:07:57.066 "seek_hole": false, 00:07:57.066 "seek_data": false, 00:07:57.066 "copy": false, 00:07:57.066 "nvme_iov_md": false 00:07:57.066 }, 00:07:57.066 "memory_domains": [ 00:07:57.066 { 00:07:57.066 "dma_device_id": "system", 00:07:57.066 "dma_device_type": 1 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.066 "dma_device_type": 2 00:07:57.066 }, 00:07:57.066 
{ 00:07:57.066 "dma_device_id": "system", 00:07:57.066 "dma_device_type": 1 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.066 "dma_device_type": 2 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "dma_device_id": "system", 00:07:57.066 "dma_device_type": 1 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.066 "dma_device_type": 2 00:07:57.066 } 00:07:57.066 ], 00:07:57.066 "driver_specific": { 00:07:57.066 "raid": { 00:07:57.066 "uuid": "4f2463f1-a1b4-4628-8840-76dc2e03d796", 00:07:57.066 "strip_size_kb": 64, 00:07:57.066 "state": "online", 00:07:57.066 "raid_level": "raid0", 00:07:57.066 "superblock": false, 00:07:57.066 "num_base_bdevs": 3, 00:07:57.066 "num_base_bdevs_discovered": 3, 00:07:57.066 "num_base_bdevs_operational": 3, 00:07:57.066 "base_bdevs_list": [ 00:07:57.066 { 00:07:57.066 "name": "NewBaseBdev", 00:07:57.066 "uuid": "fcb33825-e627-4c18-a40c-fa67233019fd", 00:07:57.066 "is_configured": true, 00:07:57.066 "data_offset": 0, 00:07:57.066 "data_size": 65536 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "name": "BaseBdev2", 00:07:57.066 "uuid": "8ed2eb8f-4657-43c4-859a-7c107fba6a2a", 00:07:57.066 "is_configured": true, 00:07:57.066 "data_offset": 0, 00:07:57.066 "data_size": 65536 00:07:57.066 }, 00:07:57.066 { 00:07:57.066 "name": "BaseBdev3", 00:07:57.066 "uuid": "c5caa6a9-d2ad-476d-bc27-df4bfc11caac", 00:07:57.066 "is_configured": true, 00:07:57.066 "data_offset": 0, 00:07:57.066 "data_size": 65536 00:07:57.066 } 00:07:57.066 ] 00:07:57.066 } 00:07:57.066 } 00:07:57.066 }' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:57.066 BaseBdev2 00:07:57.066 BaseBdev3' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.066 
16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.066 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.067 [2024-11-28 16:20:48.821028] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.067 [2024-11-28 16:20:48.821090] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.067 [2024-11-28 16:20:48.821173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.067 [2024-11-28 16:20:48.821239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.067 [2024-11-28 16:20:48.821318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75068 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75068 ']' 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75068 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:57.067 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.327 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75068 00:07:57.327 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.327 killing process with pid 75068 00:07:57.327 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.327 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75068' 00:07:57.327 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75068 00:07:57.327 [2024-11-28 16:20:48.862115] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.327 16:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75068 00:07:57.327 [2024-11-28 16:20:48.892773] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:57.588 00:07:57.588 real 0m8.505s 00:07:57.588 user 0m14.485s 00:07:57.588 sys 0m1.704s 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.588 
16:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.588 ************************************ 00:07:57.588 END TEST raid_state_function_test 00:07:57.588 ************************************ 00:07:57.588 16:20:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:57.588 16:20:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:57.588 16:20:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.588 16:20:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.588 ************************************ 00:07:57.588 START TEST raid_state_function_test_sb 00:07:57.588 ************************************ 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75667 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75667' 00:07:57.588 Process raid pid: 75667 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75667 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75667 ']' 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.588 16:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.588 [2024-11-28 16:20:49.301634] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.588 [2024-11-28 16:20:49.301850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.848 [2024-11-28 16:20:49.461891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.848 [2024-11-28 16:20:49.505560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.848 [2024-11-28 16:20:49.546448] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.848 [2024-11-28 16:20:49.546567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.417 [2024-11-28 16:20:50.127184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.417 [2024-11-28 16:20:50.127269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.417 [2024-11-28 16:20:50.127300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.417 [2024-11-28 16:20:50.127321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.417 [2024-11-28 16:20:50.127337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:58.417 [2024-11-28 16:20:50.127360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.417 16:20:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.676 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.676 "name": "Existed_Raid", 00:07:58.676 "uuid": "16a8c72b-12f0-4e1d-bbb8-5eb8ca74fbb8", 00:07:58.676 "strip_size_kb": 64, 00:07:58.676 "state": "configuring", 00:07:58.676 "raid_level": "raid0", 00:07:58.676 "superblock": true, 00:07:58.676 "num_base_bdevs": 3, 00:07:58.676 "num_base_bdevs_discovered": 0, 00:07:58.676 "num_base_bdevs_operational": 3, 00:07:58.676 "base_bdevs_list": [ 00:07:58.676 { 00:07:58.676 "name": "BaseBdev1", 00:07:58.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.676 "is_configured": false, 00:07:58.676 "data_offset": 0, 00:07:58.676 "data_size": 0 00:07:58.676 }, 00:07:58.676 { 00:07:58.676 "name": "BaseBdev2", 00:07:58.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.676 "is_configured": false, 00:07:58.676 "data_offset": 0, 00:07:58.676 "data_size": 0 00:07:58.676 }, 00:07:58.676 { 00:07:58.676 "name": "BaseBdev3", 00:07:58.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.676 "is_configured": false, 00:07:58.676 "data_offset": 0, 00:07:58.677 "data_size": 0 00:07:58.677 } 00:07:58.677 ] 00:07:58.677 }' 00:07:58.677 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.677 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 [2024-11-28 16:20:50.558360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.937 [2024-11-28 16:20:50.558486] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 [2024-11-28 16:20:50.570366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.937 [2024-11-28 16:20:50.570442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.937 [2024-11-28 16:20:50.570454] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.937 [2024-11-28 16:20:50.570463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.937 [2024-11-28 16:20:50.570469] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:58.937 [2024-11-28 16:20:50.570477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 [2024-11-28 16:20:50.590978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.937 BaseBdev1 
00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.937 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.937 [ 00:07:58.937 { 00:07:58.937 "name": "BaseBdev1", 00:07:58.937 "aliases": [ 00:07:58.937 "403d47b2-4c76-4e02-8afb-6793971f5527" 00:07:58.937 ], 00:07:58.937 "product_name": "Malloc disk", 00:07:58.938 "block_size": 512, 00:07:58.938 "num_blocks": 65536, 00:07:58.938 "uuid": "403d47b2-4c76-4e02-8afb-6793971f5527", 00:07:58.938 "assigned_rate_limits": { 00:07:58.938 
"rw_ios_per_sec": 0, 00:07:58.938 "rw_mbytes_per_sec": 0, 00:07:58.938 "r_mbytes_per_sec": 0, 00:07:58.938 "w_mbytes_per_sec": 0 00:07:58.938 }, 00:07:58.938 "claimed": true, 00:07:58.938 "claim_type": "exclusive_write", 00:07:58.938 "zoned": false, 00:07:58.938 "supported_io_types": { 00:07:58.938 "read": true, 00:07:58.938 "write": true, 00:07:58.938 "unmap": true, 00:07:58.938 "flush": true, 00:07:58.938 "reset": true, 00:07:58.938 "nvme_admin": false, 00:07:58.938 "nvme_io": false, 00:07:58.938 "nvme_io_md": false, 00:07:58.938 "write_zeroes": true, 00:07:58.938 "zcopy": true, 00:07:58.938 "get_zone_info": false, 00:07:58.938 "zone_management": false, 00:07:58.938 "zone_append": false, 00:07:58.938 "compare": false, 00:07:58.938 "compare_and_write": false, 00:07:58.938 "abort": true, 00:07:58.938 "seek_hole": false, 00:07:58.938 "seek_data": false, 00:07:58.938 "copy": true, 00:07:58.938 "nvme_iov_md": false 00:07:58.938 }, 00:07:58.938 "memory_domains": [ 00:07:58.938 { 00:07:58.938 "dma_device_id": "system", 00:07:58.938 "dma_device_type": 1 00:07:58.938 }, 00:07:58.938 { 00:07:58.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.938 "dma_device_type": 2 00:07:58.938 } 00:07:58.938 ], 00:07:58.938 "driver_specific": {} 00:07:58.938 } 00:07:58.938 ] 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.938 "name": "Existed_Raid", 00:07:58.938 "uuid": "06ba546b-71e1-4f00-9433-017d23017049", 00:07:58.938 "strip_size_kb": 64, 00:07:58.938 "state": "configuring", 00:07:58.938 "raid_level": "raid0", 00:07:58.938 "superblock": true, 00:07:58.938 "num_base_bdevs": 3, 00:07:58.938 "num_base_bdevs_discovered": 1, 00:07:58.938 "num_base_bdevs_operational": 3, 00:07:58.938 "base_bdevs_list": [ 00:07:58.938 { 00:07:58.938 "name": "BaseBdev1", 00:07:58.938 "uuid": "403d47b2-4c76-4e02-8afb-6793971f5527", 00:07:58.938 "is_configured": true, 00:07:58.938 "data_offset": 2048, 00:07:58.938 "data_size": 63488 
00:07:58.938 }, 00:07:58.938 { 00:07:58.938 "name": "BaseBdev2", 00:07:58.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.938 "is_configured": false, 00:07:58.938 "data_offset": 0, 00:07:58.938 "data_size": 0 00:07:58.938 }, 00:07:58.938 { 00:07:58.938 "name": "BaseBdev3", 00:07:58.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.938 "is_configured": false, 00:07:58.938 "data_offset": 0, 00:07:58.938 "data_size": 0 00:07:58.938 } 00:07:58.938 ] 00:07:58.938 }' 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.938 16:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.508 [2024-11-28 16:20:51.066211] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.508 [2024-11-28 16:20:51.066302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.508 [2024-11-28 16:20:51.078219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.508 [2024-11-28 
16:20:51.080100] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.508 [2024-11-28 16:20:51.080174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.508 [2024-11-28 16:20:51.080201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:59.508 [2024-11-28 16:20:51.080224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.508 "name": "Existed_Raid", 00:07:59.508 "uuid": "a7468b2d-2d84-42e9-ab49-20c766cc7d64", 00:07:59.508 "strip_size_kb": 64, 00:07:59.508 "state": "configuring", 00:07:59.508 "raid_level": "raid0", 00:07:59.508 "superblock": true, 00:07:59.508 "num_base_bdevs": 3, 00:07:59.508 "num_base_bdevs_discovered": 1, 00:07:59.508 "num_base_bdevs_operational": 3, 00:07:59.508 "base_bdevs_list": [ 00:07:59.508 { 00:07:59.508 "name": "BaseBdev1", 00:07:59.508 "uuid": "403d47b2-4c76-4e02-8afb-6793971f5527", 00:07:59.508 "is_configured": true, 00:07:59.508 "data_offset": 2048, 00:07:59.508 "data_size": 63488 00:07:59.508 }, 00:07:59.508 { 00:07:59.508 "name": "BaseBdev2", 00:07:59.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.508 "is_configured": false, 00:07:59.508 "data_offset": 0, 00:07:59.508 "data_size": 0 00:07:59.508 }, 00:07:59.508 { 00:07:59.508 "name": "BaseBdev3", 00:07:59.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.508 "is_configured": false, 00:07:59.508 "data_offset": 0, 00:07:59.508 "data_size": 0 00:07:59.508 } 00:07:59.508 ] 00:07:59.508 }' 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.508 16:20:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.768 [2024-11-28 16:20:51.525569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.768 BaseBdev2 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.768 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.029 [ 00:08:00.029 { 00:08:00.029 "name": "BaseBdev2", 00:08:00.029 "aliases": [ 00:08:00.029 "9d224e48-82e8-4900-a301-c8b2bd885fb3" 00:08:00.029 ], 00:08:00.029 "product_name": "Malloc disk", 00:08:00.029 "block_size": 512, 00:08:00.029 "num_blocks": 65536, 00:08:00.029 "uuid": "9d224e48-82e8-4900-a301-c8b2bd885fb3", 00:08:00.029 "assigned_rate_limits": { 00:08:00.029 "rw_ios_per_sec": 0, 00:08:00.029 "rw_mbytes_per_sec": 0, 00:08:00.029 "r_mbytes_per_sec": 0, 00:08:00.029 "w_mbytes_per_sec": 0 00:08:00.029 }, 00:08:00.029 "claimed": true, 00:08:00.029 "claim_type": "exclusive_write", 00:08:00.029 "zoned": false, 00:08:00.029 "supported_io_types": { 00:08:00.029 "read": true, 00:08:00.029 "write": true, 00:08:00.029 "unmap": true, 00:08:00.029 "flush": true, 00:08:00.029 "reset": true, 00:08:00.029 "nvme_admin": false, 00:08:00.029 "nvme_io": false, 00:08:00.029 "nvme_io_md": false, 00:08:00.029 "write_zeroes": true, 00:08:00.029 "zcopy": true, 00:08:00.029 "get_zone_info": false, 00:08:00.029 "zone_management": false, 00:08:00.029 "zone_append": false, 00:08:00.029 "compare": false, 00:08:00.029 "compare_and_write": false, 00:08:00.029 "abort": true, 00:08:00.029 "seek_hole": false, 00:08:00.029 "seek_data": false, 00:08:00.029 "copy": true, 00:08:00.029 "nvme_iov_md": false 00:08:00.029 }, 00:08:00.029 "memory_domains": [ 00:08:00.029 { 00:08:00.029 "dma_device_id": "system", 00:08:00.029 "dma_device_type": 1 00:08:00.029 }, 00:08:00.029 { 00:08:00.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.029 "dma_device_type": 2 00:08:00.029 } 00:08:00.029 ], 00:08:00.029 "driver_specific": {} 00:08:00.029 } 00:08:00.029 ] 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.029 "name": "Existed_Raid", 00:08:00.029 "uuid": "a7468b2d-2d84-42e9-ab49-20c766cc7d64", 00:08:00.029 "strip_size_kb": 64, 00:08:00.029 "state": "configuring", 00:08:00.029 "raid_level": "raid0", 00:08:00.029 "superblock": true, 00:08:00.029 "num_base_bdevs": 3, 00:08:00.029 "num_base_bdevs_discovered": 2, 00:08:00.029 "num_base_bdevs_operational": 3, 00:08:00.029 "base_bdevs_list": [ 00:08:00.029 { 00:08:00.029 "name": "BaseBdev1", 00:08:00.029 "uuid": "403d47b2-4c76-4e02-8afb-6793971f5527", 00:08:00.029 "is_configured": true, 00:08:00.029 "data_offset": 2048, 00:08:00.029 "data_size": 63488 00:08:00.029 }, 00:08:00.029 { 00:08:00.029 "name": "BaseBdev2", 00:08:00.029 "uuid": "9d224e48-82e8-4900-a301-c8b2bd885fb3", 00:08:00.029 "is_configured": true, 00:08:00.029 "data_offset": 2048, 00:08:00.029 "data_size": 63488 00:08:00.029 }, 00:08:00.029 { 00:08:00.029 "name": "BaseBdev3", 00:08:00.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.029 "is_configured": false, 00:08:00.029 "data_offset": 0, 00:08:00.029 "data_size": 0 00:08:00.029 } 00:08:00.029 ] 00:08:00.029 }' 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.029 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.289 [2024-11-28 16:20:51.955608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.289 [2024-11-28 16:20:51.955864] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:00.289 [2024-11-28 16:20:51.955926] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:00.289 BaseBdev3 00:08:00.289 [2024-11-28 16:20:51.956212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:00.289 [2024-11-28 16:20:51.956338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:00.289 [2024-11-28 16:20:51.956352] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:00.289 [2024-11-28 16:20:51.956470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.289 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.289 [ 00:08:00.289 { 00:08:00.289 "name": "BaseBdev3", 00:08:00.289 "aliases": [ 00:08:00.289 "38e790a2-8309-4c43-88b5-3f11aabfd071" 00:08:00.289 ], 00:08:00.289 "product_name": "Malloc disk", 00:08:00.289 "block_size": 512, 00:08:00.289 "num_blocks": 65536, 00:08:00.289 "uuid": "38e790a2-8309-4c43-88b5-3f11aabfd071", 00:08:00.289 "assigned_rate_limits": { 00:08:00.289 "rw_ios_per_sec": 0, 00:08:00.289 "rw_mbytes_per_sec": 0, 00:08:00.289 "r_mbytes_per_sec": 0, 00:08:00.289 "w_mbytes_per_sec": 0 00:08:00.289 }, 00:08:00.289 "claimed": true, 00:08:00.289 "claim_type": "exclusive_write", 00:08:00.289 "zoned": false, 00:08:00.289 "supported_io_types": { 00:08:00.289 "read": true, 00:08:00.289 "write": true, 00:08:00.289 "unmap": true, 00:08:00.289 "flush": true, 00:08:00.289 "reset": true, 00:08:00.289 "nvme_admin": false, 00:08:00.289 "nvme_io": false, 00:08:00.289 "nvme_io_md": false, 00:08:00.289 "write_zeroes": true, 00:08:00.289 "zcopy": true, 00:08:00.289 "get_zone_info": false, 00:08:00.289 "zone_management": false, 00:08:00.289 "zone_append": false, 00:08:00.289 "compare": false, 00:08:00.289 "compare_and_write": false, 00:08:00.289 "abort": true, 00:08:00.289 "seek_hole": false, 00:08:00.290 "seek_data": false, 00:08:00.290 "copy": true, 00:08:00.290 "nvme_iov_md": false 00:08:00.290 }, 00:08:00.290 "memory_domains": [ 00:08:00.290 { 00:08:00.290 "dma_device_id": "system", 00:08:00.290 "dma_device_type": 1 00:08:00.290 }, 00:08:00.290 { 00:08:00.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.290 "dma_device_type": 2 00:08:00.290 } 00:08:00.290 ], 00:08:00.290 "driver_specific": 
{} 00:08:00.290 } 00:08:00.290 ] 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.290 16:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.290 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:00.290 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.290 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.290 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.290 "name": "Existed_Raid", 00:08:00.290 "uuid": "a7468b2d-2d84-42e9-ab49-20c766cc7d64", 00:08:00.290 "strip_size_kb": 64, 00:08:00.290 "state": "online", 00:08:00.290 "raid_level": "raid0", 00:08:00.290 "superblock": true, 00:08:00.290 "num_base_bdevs": 3, 00:08:00.290 "num_base_bdevs_discovered": 3, 00:08:00.290 "num_base_bdevs_operational": 3, 00:08:00.290 "base_bdevs_list": [ 00:08:00.290 { 00:08:00.290 "name": "BaseBdev1", 00:08:00.290 "uuid": "403d47b2-4c76-4e02-8afb-6793971f5527", 00:08:00.290 "is_configured": true, 00:08:00.290 "data_offset": 2048, 00:08:00.290 "data_size": 63488 00:08:00.290 }, 00:08:00.290 { 00:08:00.290 "name": "BaseBdev2", 00:08:00.290 "uuid": "9d224e48-82e8-4900-a301-c8b2bd885fb3", 00:08:00.290 "is_configured": true, 00:08:00.290 "data_offset": 2048, 00:08:00.290 "data_size": 63488 00:08:00.290 }, 00:08:00.290 { 00:08:00.290 "name": "BaseBdev3", 00:08:00.290 "uuid": "38e790a2-8309-4c43-88b5-3f11aabfd071", 00:08:00.290 "is_configured": true, 00:08:00.290 "data_offset": 2048, 00:08:00.290 "data_size": 63488 00:08:00.290 } 00:08:00.290 ] 00:08:00.290 }' 00:08:00.290 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.290 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.860 [2024-11-28 16:20:52.415134] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.860 "name": "Existed_Raid", 00:08:00.860 "aliases": [ 00:08:00.860 "a7468b2d-2d84-42e9-ab49-20c766cc7d64" 00:08:00.860 ], 00:08:00.860 "product_name": "Raid Volume", 00:08:00.860 "block_size": 512, 00:08:00.860 "num_blocks": 190464, 00:08:00.860 "uuid": "a7468b2d-2d84-42e9-ab49-20c766cc7d64", 00:08:00.860 "assigned_rate_limits": { 00:08:00.860 "rw_ios_per_sec": 0, 00:08:00.860 "rw_mbytes_per_sec": 0, 00:08:00.860 "r_mbytes_per_sec": 0, 00:08:00.860 "w_mbytes_per_sec": 0 00:08:00.860 }, 00:08:00.860 "claimed": false, 00:08:00.860 "zoned": false, 00:08:00.860 "supported_io_types": { 00:08:00.860 "read": true, 00:08:00.860 "write": true, 00:08:00.860 "unmap": true, 00:08:00.860 "flush": true, 00:08:00.860 "reset": true, 00:08:00.860 "nvme_admin": false, 00:08:00.860 "nvme_io": false, 00:08:00.860 "nvme_io_md": false, 00:08:00.860 
"write_zeroes": true, 00:08:00.860 "zcopy": false, 00:08:00.860 "get_zone_info": false, 00:08:00.860 "zone_management": false, 00:08:00.860 "zone_append": false, 00:08:00.860 "compare": false, 00:08:00.860 "compare_and_write": false, 00:08:00.860 "abort": false, 00:08:00.860 "seek_hole": false, 00:08:00.860 "seek_data": false, 00:08:00.860 "copy": false, 00:08:00.860 "nvme_iov_md": false 00:08:00.860 }, 00:08:00.860 "memory_domains": [ 00:08:00.860 { 00:08:00.860 "dma_device_id": "system", 00:08:00.860 "dma_device_type": 1 00:08:00.860 }, 00:08:00.860 { 00:08:00.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.860 "dma_device_type": 2 00:08:00.860 }, 00:08:00.860 { 00:08:00.860 "dma_device_id": "system", 00:08:00.860 "dma_device_type": 1 00:08:00.860 }, 00:08:00.860 { 00:08:00.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.860 "dma_device_type": 2 00:08:00.860 }, 00:08:00.860 { 00:08:00.860 "dma_device_id": "system", 00:08:00.860 "dma_device_type": 1 00:08:00.860 }, 00:08:00.860 { 00:08:00.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.860 "dma_device_type": 2 00:08:00.860 } 00:08:00.860 ], 00:08:00.860 "driver_specific": { 00:08:00.860 "raid": { 00:08:00.860 "uuid": "a7468b2d-2d84-42e9-ab49-20c766cc7d64", 00:08:00.860 "strip_size_kb": 64, 00:08:00.860 "state": "online", 00:08:00.860 "raid_level": "raid0", 00:08:00.860 "superblock": true, 00:08:00.860 "num_base_bdevs": 3, 00:08:00.860 "num_base_bdevs_discovered": 3, 00:08:00.860 "num_base_bdevs_operational": 3, 00:08:00.860 "base_bdevs_list": [ 00:08:00.860 { 00:08:00.860 "name": "BaseBdev1", 00:08:00.860 "uuid": "403d47b2-4c76-4e02-8afb-6793971f5527", 00:08:00.860 "is_configured": true, 00:08:00.860 "data_offset": 2048, 00:08:00.860 "data_size": 63488 00:08:00.860 }, 00:08:00.860 { 00:08:00.860 "name": "BaseBdev2", 00:08:00.860 "uuid": "9d224e48-82e8-4900-a301-c8b2bd885fb3", 00:08:00.860 "is_configured": true, 00:08:00.860 "data_offset": 2048, 00:08:00.860 "data_size": 63488 00:08:00.860 }, 
00:08:00.860 { 00:08:00.860 "name": "BaseBdev3", 00:08:00.860 "uuid": "38e790a2-8309-4c43-88b5-3f11aabfd071", 00:08:00.860 "is_configured": true, 00:08:00.860 "data_offset": 2048, 00:08:00.860 "data_size": 63488 00:08:00.860 } 00:08:00.860 ] 00:08:00.860 } 00:08:00.860 } 00:08:00.860 }' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:00.860 BaseBdev2 00:08:00.860 BaseBdev3' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.860 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.861 
16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.861 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.121 [2024-11-28 16:20:52.658547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.121 [2024-11-28 16:20:52.658575] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.121 [2024-11-28 16:20:52.658632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.121 "name": "Existed_Raid", 00:08:01.121 "uuid": "a7468b2d-2d84-42e9-ab49-20c766cc7d64", 00:08:01.121 "strip_size_kb": 64, 00:08:01.121 "state": "offline", 00:08:01.121 "raid_level": "raid0", 00:08:01.121 "superblock": true, 00:08:01.121 "num_base_bdevs": 3, 00:08:01.121 "num_base_bdevs_discovered": 2, 00:08:01.121 "num_base_bdevs_operational": 2, 00:08:01.121 "base_bdevs_list": [ 00:08:01.121 { 00:08:01.121 "name": null, 00:08:01.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.121 "is_configured": false, 00:08:01.121 "data_offset": 0, 00:08:01.121 "data_size": 63488 00:08:01.121 }, 00:08:01.121 { 00:08:01.121 "name": "BaseBdev2", 00:08:01.121 "uuid": "9d224e48-82e8-4900-a301-c8b2bd885fb3", 00:08:01.121 "is_configured": true, 00:08:01.121 "data_offset": 2048, 00:08:01.121 "data_size": 63488 00:08:01.121 }, 00:08:01.121 { 00:08:01.121 "name": "BaseBdev3", 00:08:01.121 "uuid": "38e790a2-8309-4c43-88b5-3f11aabfd071", 
00:08:01.121 "is_configured": true, 00:08:01.121 "data_offset": 2048, 00:08:01.121 "data_size": 63488 00:08:01.121 } 00:08:01.121 ] 00:08:01.121 }' 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.121 16:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.380 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.381 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:01.381 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.381 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.640 [2024-11-28 16:20:53.152974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.640 [2024-11-28 16:20:53.211818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.640 [2024-11-28 16:20:53.211914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.640 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 BaseBdev2 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 [ 00:08:01.641 { 00:08:01.641 "name": "BaseBdev2", 00:08:01.641 "aliases": [ 00:08:01.641 "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0" 00:08:01.641 ], 00:08:01.641 "product_name": "Malloc disk", 00:08:01.641 "block_size": 512, 00:08:01.641 "num_blocks": 65536, 00:08:01.641 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:01.641 "assigned_rate_limits": { 00:08:01.641 "rw_ios_per_sec": 0, 00:08:01.641 "rw_mbytes_per_sec": 0, 00:08:01.641 "r_mbytes_per_sec": 0, 00:08:01.641 "w_mbytes_per_sec": 0 00:08:01.641 }, 00:08:01.641 "claimed": false, 00:08:01.641 "zoned": false, 00:08:01.641 "supported_io_types": { 00:08:01.641 "read": true, 00:08:01.641 "write": true, 00:08:01.641 "unmap": true, 00:08:01.641 "flush": true, 00:08:01.641 "reset": true, 00:08:01.641 "nvme_admin": false, 00:08:01.641 "nvme_io": false, 00:08:01.641 "nvme_io_md": false, 00:08:01.641 "write_zeroes": true, 00:08:01.641 "zcopy": true, 00:08:01.641 "get_zone_info": false, 00:08:01.641 "zone_management": false, 00:08:01.641 
"zone_append": false, 00:08:01.641 "compare": false, 00:08:01.641 "compare_and_write": false, 00:08:01.641 "abort": true, 00:08:01.641 "seek_hole": false, 00:08:01.641 "seek_data": false, 00:08:01.641 "copy": true, 00:08:01.641 "nvme_iov_md": false 00:08:01.641 }, 00:08:01.641 "memory_domains": [ 00:08:01.641 { 00:08:01.641 "dma_device_id": "system", 00:08:01.641 "dma_device_type": 1 00:08:01.641 }, 00:08:01.641 { 00:08:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.641 "dma_device_type": 2 00:08:01.641 } 00:08:01.641 ], 00:08:01.641 "driver_specific": {} 00:08:01.641 } 00:08:01.641 ] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 BaseBdev3 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:01.641 
16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 [ 00:08:01.641 { 00:08:01.641 "name": "BaseBdev3", 00:08:01.641 "aliases": [ 00:08:01.641 "be02a0ca-26f2-4bc7-a970-b9948bc89568" 00:08:01.641 ], 00:08:01.641 "product_name": "Malloc disk", 00:08:01.641 "block_size": 512, 00:08:01.641 "num_blocks": 65536, 00:08:01.641 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:01.641 "assigned_rate_limits": { 00:08:01.641 "rw_ios_per_sec": 0, 00:08:01.641 "rw_mbytes_per_sec": 0, 00:08:01.641 "r_mbytes_per_sec": 0, 00:08:01.641 "w_mbytes_per_sec": 0 00:08:01.641 }, 00:08:01.641 "claimed": false, 00:08:01.641 "zoned": false, 00:08:01.641 "supported_io_types": { 00:08:01.641 "read": true, 00:08:01.641 "write": true, 00:08:01.641 "unmap": true, 00:08:01.641 "flush": true, 00:08:01.641 "reset": true, 00:08:01.641 "nvme_admin": false, 00:08:01.641 "nvme_io": false, 00:08:01.641 "nvme_io_md": false, 00:08:01.641 "write_zeroes": true, 00:08:01.641 "zcopy": true, 00:08:01.641 "get_zone_info": false, 
00:08:01.641 "zone_management": false, 00:08:01.641 "zone_append": false, 00:08:01.641 "compare": false, 00:08:01.641 "compare_and_write": false, 00:08:01.641 "abort": true, 00:08:01.641 "seek_hole": false, 00:08:01.641 "seek_data": false, 00:08:01.641 "copy": true, 00:08:01.641 "nvme_iov_md": false 00:08:01.641 }, 00:08:01.641 "memory_domains": [ 00:08:01.641 { 00:08:01.641 "dma_device_id": "system", 00:08:01.641 "dma_device_type": 1 00:08:01.641 }, 00:08:01.641 { 00:08:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.641 "dma_device_type": 2 00:08:01.641 } 00:08:01.641 ], 00:08:01.641 "driver_specific": {} 00:08:01.641 } 00:08:01.641 ] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.641 [2024-11-28 16:20:53.366607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.641 [2024-11-28 16:20:53.366689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.641 [2024-11-28 16:20:53.366727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.641 [2024-11-28 16:20:53.368490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.641 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.642 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.642 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.642 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.642 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:01.642 "name": "Existed_Raid", 00:08:01.642 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:01.642 "strip_size_kb": 64, 00:08:01.642 "state": "configuring", 00:08:01.642 "raid_level": "raid0", 00:08:01.642 "superblock": true, 00:08:01.642 "num_base_bdevs": 3, 00:08:01.642 "num_base_bdevs_discovered": 2, 00:08:01.642 "num_base_bdevs_operational": 3, 00:08:01.642 "base_bdevs_list": [ 00:08:01.642 { 00:08:01.642 "name": "BaseBdev1", 00:08:01.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.642 "is_configured": false, 00:08:01.642 "data_offset": 0, 00:08:01.642 "data_size": 0 00:08:01.642 }, 00:08:01.642 { 00:08:01.642 "name": "BaseBdev2", 00:08:01.642 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:01.642 "is_configured": true, 00:08:01.642 "data_offset": 2048, 00:08:01.642 "data_size": 63488 00:08:01.642 }, 00:08:01.642 { 00:08:01.642 "name": "BaseBdev3", 00:08:01.642 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:01.642 "is_configured": true, 00:08:01.642 "data_offset": 2048, 00:08:01.642 "data_size": 63488 00:08:01.642 } 00:08:01.642 ] 00:08:01.642 }' 00:08:01.642 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.642 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 [2024-11-28 16:20:53.809824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.211 "name": "Existed_Raid", 00:08:02.211 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:02.211 "strip_size_kb": 64, 00:08:02.211 "state": "configuring", 00:08:02.211 "raid_level": "raid0", 
00:08:02.211 "superblock": true, 00:08:02.211 "num_base_bdevs": 3, 00:08:02.211 "num_base_bdevs_discovered": 1, 00:08:02.211 "num_base_bdevs_operational": 3, 00:08:02.211 "base_bdevs_list": [ 00:08:02.211 { 00:08:02.211 "name": "BaseBdev1", 00:08:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.211 "is_configured": false, 00:08:02.211 "data_offset": 0, 00:08:02.211 "data_size": 0 00:08:02.211 }, 00:08:02.211 { 00:08:02.211 "name": null, 00:08:02.211 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:02.211 "is_configured": false, 00:08:02.211 "data_offset": 0, 00:08:02.211 "data_size": 63488 00:08:02.211 }, 00:08:02.211 { 00:08:02.211 "name": "BaseBdev3", 00:08:02.211 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:02.211 "is_configured": true, 00:08:02.211 "data_offset": 2048, 00:08:02.211 "data_size": 63488 00:08:02.211 } 00:08:02.211 ] 00:08:02.211 }' 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.211 16:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.471 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:02.471 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.471 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.471 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.471 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.731 [2024-11-28 16:20:54.255946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.731 BaseBdev1 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.731 [ 00:08:02.731 { 00:08:02.731 "name": "BaseBdev1", 00:08:02.731 
"aliases": [ 00:08:02.731 "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638" 00:08:02.731 ], 00:08:02.731 "product_name": "Malloc disk", 00:08:02.731 "block_size": 512, 00:08:02.731 "num_blocks": 65536, 00:08:02.731 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:02.731 "assigned_rate_limits": { 00:08:02.731 "rw_ios_per_sec": 0, 00:08:02.731 "rw_mbytes_per_sec": 0, 00:08:02.731 "r_mbytes_per_sec": 0, 00:08:02.731 "w_mbytes_per_sec": 0 00:08:02.731 }, 00:08:02.731 "claimed": true, 00:08:02.731 "claim_type": "exclusive_write", 00:08:02.731 "zoned": false, 00:08:02.731 "supported_io_types": { 00:08:02.731 "read": true, 00:08:02.731 "write": true, 00:08:02.731 "unmap": true, 00:08:02.731 "flush": true, 00:08:02.731 "reset": true, 00:08:02.731 "nvme_admin": false, 00:08:02.731 "nvme_io": false, 00:08:02.731 "nvme_io_md": false, 00:08:02.731 "write_zeroes": true, 00:08:02.731 "zcopy": true, 00:08:02.731 "get_zone_info": false, 00:08:02.731 "zone_management": false, 00:08:02.731 "zone_append": false, 00:08:02.731 "compare": false, 00:08:02.731 "compare_and_write": false, 00:08:02.731 "abort": true, 00:08:02.731 "seek_hole": false, 00:08:02.731 "seek_data": false, 00:08:02.731 "copy": true, 00:08:02.731 "nvme_iov_md": false 00:08:02.731 }, 00:08:02.731 "memory_domains": [ 00:08:02.731 { 00:08:02.731 "dma_device_id": "system", 00:08:02.731 "dma_device_type": 1 00:08:02.731 }, 00:08:02.731 { 00:08:02.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.731 "dma_device_type": 2 00:08:02.731 } 00:08:02.731 ], 00:08:02.731 "driver_specific": {} 00:08:02.731 } 00:08:02.731 ] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.731 16:20:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.731 "name": "Existed_Raid", 00:08:02.731 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:02.731 "strip_size_kb": 64, 00:08:02.731 "state": "configuring", 00:08:02.731 "raid_level": "raid0", 00:08:02.731 "superblock": true, 00:08:02.731 "num_base_bdevs": 3, 00:08:02.731 
"num_base_bdevs_discovered": 2, 00:08:02.731 "num_base_bdevs_operational": 3, 00:08:02.731 "base_bdevs_list": [ 00:08:02.731 { 00:08:02.731 "name": "BaseBdev1", 00:08:02.731 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:02.731 "is_configured": true, 00:08:02.731 "data_offset": 2048, 00:08:02.731 "data_size": 63488 00:08:02.731 }, 00:08:02.731 { 00:08:02.731 "name": null, 00:08:02.731 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:02.731 "is_configured": false, 00:08:02.731 "data_offset": 0, 00:08:02.731 "data_size": 63488 00:08:02.731 }, 00:08:02.731 { 00:08:02.731 "name": "BaseBdev3", 00:08:02.731 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:02.731 "is_configured": true, 00:08:02.731 "data_offset": 2048, 00:08:02.731 "data_size": 63488 00:08:02.731 } 00:08:02.731 ] 00:08:02.731 }' 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.731 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.991 16:20:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.991 [2024-11-28 16:20:54.755304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.991 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.250 16:20:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.250 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.250 "name": "Existed_Raid", 00:08:03.250 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:03.250 "strip_size_kb": 64, 00:08:03.250 "state": "configuring", 00:08:03.250 "raid_level": "raid0", 00:08:03.250 "superblock": true, 00:08:03.250 "num_base_bdevs": 3, 00:08:03.250 "num_base_bdevs_discovered": 1, 00:08:03.250 "num_base_bdevs_operational": 3, 00:08:03.250 "base_bdevs_list": [ 00:08:03.250 { 00:08:03.251 "name": "BaseBdev1", 00:08:03.251 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:03.251 "is_configured": true, 00:08:03.251 "data_offset": 2048, 00:08:03.251 "data_size": 63488 00:08:03.251 }, 00:08:03.251 { 00:08:03.251 "name": null, 00:08:03.251 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:03.251 "is_configured": false, 00:08:03.251 "data_offset": 0, 00:08:03.251 "data_size": 63488 00:08:03.251 }, 00:08:03.251 { 00:08:03.251 "name": null, 00:08:03.251 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:03.251 "is_configured": false, 00:08:03.251 "data_offset": 0, 00:08:03.251 "data_size": 63488 00:08:03.251 } 00:08:03.251 ] 00:08:03.251 }' 00:08:03.251 16:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.251 16:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.512 16:20:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.512 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 [2024-11-28 16:20:55.214543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.513 "name": "Existed_Raid", 00:08:03.513 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:03.513 "strip_size_kb": 64, 00:08:03.513 "state": "configuring", 00:08:03.513 "raid_level": "raid0", 00:08:03.513 "superblock": true, 00:08:03.513 "num_base_bdevs": 3, 00:08:03.513 "num_base_bdevs_discovered": 2, 00:08:03.513 "num_base_bdevs_operational": 3, 00:08:03.513 "base_bdevs_list": [ 00:08:03.513 { 00:08:03.513 "name": "BaseBdev1", 00:08:03.513 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:03.513 "is_configured": true, 00:08:03.513 "data_offset": 2048, 00:08:03.513 "data_size": 63488 00:08:03.513 }, 00:08:03.513 { 00:08:03.513 "name": null, 00:08:03.513 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:03.513 "is_configured": false, 00:08:03.513 "data_offset": 0, 00:08:03.513 "data_size": 63488 00:08:03.513 }, 00:08:03.513 { 00:08:03.513 "name": "BaseBdev3", 00:08:03.513 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:03.513 "is_configured": true, 00:08:03.513 "data_offset": 2048, 00:08:03.513 "data_size": 63488 00:08:03.513 } 00:08:03.513 ] 00:08:03.513 }' 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.513 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.134 [2024-11-28 16:20:55.689732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.134 "name": "Existed_Raid", 00:08:04.134 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:04.134 "strip_size_kb": 64, 00:08:04.134 "state": "configuring", 00:08:04.134 "raid_level": "raid0", 00:08:04.134 "superblock": true, 00:08:04.134 "num_base_bdevs": 3, 00:08:04.134 "num_base_bdevs_discovered": 1, 00:08:04.134 "num_base_bdevs_operational": 3, 00:08:04.134 "base_bdevs_list": [ 00:08:04.134 { 00:08:04.134 "name": null, 00:08:04.134 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:04.134 "is_configured": false, 00:08:04.134 "data_offset": 0, 00:08:04.134 "data_size": 63488 00:08:04.134 }, 00:08:04.134 { 00:08:04.134 "name": null, 00:08:04.134 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:04.134 "is_configured": false, 00:08:04.134 "data_offset": 0, 00:08:04.134 "data_size": 63488 00:08:04.134 
}, 00:08:04.134 { 00:08:04.134 "name": "BaseBdev3", 00:08:04.134 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:04.134 "is_configured": true, 00:08:04.134 "data_offset": 2048, 00:08:04.134 "data_size": 63488 00:08:04.134 } 00:08:04.134 ] 00:08:04.134 }' 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.134 16:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.395 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:04.395 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.395 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.395 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.395 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.655 [2024-11-28 16:20:56.191465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.655 "name": "Existed_Raid", 00:08:04.655 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:04.655 "strip_size_kb": 64, 00:08:04.655 "state": "configuring", 00:08:04.655 "raid_level": "raid0", 00:08:04.655 "superblock": true, 00:08:04.655 "num_base_bdevs": 3, 00:08:04.655 "num_base_bdevs_discovered": 2, 00:08:04.655 
"num_base_bdevs_operational": 3, 00:08:04.655 "base_bdevs_list": [ 00:08:04.655 { 00:08:04.655 "name": null, 00:08:04.655 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:04.655 "is_configured": false, 00:08:04.655 "data_offset": 0, 00:08:04.655 "data_size": 63488 00:08:04.655 }, 00:08:04.655 { 00:08:04.655 "name": "BaseBdev2", 00:08:04.655 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:04.655 "is_configured": true, 00:08:04.655 "data_offset": 2048, 00:08:04.655 "data_size": 63488 00:08:04.655 }, 00:08:04.655 { 00:08:04.655 "name": "BaseBdev3", 00:08:04.655 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:04.655 "is_configured": true, 00:08:04.655 "data_offset": 2048, 00:08:04.655 "data_size": 63488 00:08:04.655 } 00:08:04.655 ] 00:08:04.655 }' 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.655 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.915 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2798b5f8-ea54-42bb-a3a8-e79ef1dd0638 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.175 [2024-11-28 16:20:56.697459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:05.175 [2024-11-28 16:20:56.697673] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:05.175 [2024-11-28 16:20:56.697723] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:05.175 [2024-11-28 16:20:56.697990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:05.175 NewBaseBdev 00:08:05.175 [2024-11-28 16:20:56.698136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:05.175 [2024-11-28 16:20:56.698147] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:05.175 [2024-11-28 16:20:56.698247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:05.175 16:20:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.175 [ 00:08:05.175 { 00:08:05.175 "name": "NewBaseBdev", 00:08:05.175 "aliases": [ 00:08:05.175 "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638" 00:08:05.175 ], 00:08:05.175 "product_name": "Malloc disk", 00:08:05.175 "block_size": 512, 00:08:05.175 "num_blocks": 65536, 00:08:05.175 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:05.175 "assigned_rate_limits": { 00:08:05.175 "rw_ios_per_sec": 0, 00:08:05.175 "rw_mbytes_per_sec": 0, 00:08:05.175 "r_mbytes_per_sec": 0, 00:08:05.175 "w_mbytes_per_sec": 0 00:08:05.175 }, 00:08:05.175 "claimed": true, 00:08:05.175 "claim_type": "exclusive_write", 00:08:05.175 "zoned": false, 00:08:05.175 "supported_io_types": { 00:08:05.175 "read": true, 00:08:05.175 "write": true, 00:08:05.175 "unmap": true, 
00:08:05.175 "flush": true, 00:08:05.175 "reset": true, 00:08:05.175 "nvme_admin": false, 00:08:05.175 "nvme_io": false, 00:08:05.175 "nvme_io_md": false, 00:08:05.175 "write_zeroes": true, 00:08:05.175 "zcopy": true, 00:08:05.175 "get_zone_info": false, 00:08:05.175 "zone_management": false, 00:08:05.175 "zone_append": false, 00:08:05.175 "compare": false, 00:08:05.175 "compare_and_write": false, 00:08:05.175 "abort": true, 00:08:05.175 "seek_hole": false, 00:08:05.175 "seek_data": false, 00:08:05.175 "copy": true, 00:08:05.175 "nvme_iov_md": false 00:08:05.175 }, 00:08:05.175 "memory_domains": [ 00:08:05.175 { 00:08:05.175 "dma_device_id": "system", 00:08:05.175 "dma_device_type": 1 00:08:05.175 }, 00:08:05.175 { 00:08:05.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.175 "dma_device_type": 2 00:08:05.175 } 00:08:05.175 ], 00:08:05.175 "driver_specific": {} 00:08:05.175 } 00:08:05.175 ] 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.175 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.175 16:20:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.176 "name": "Existed_Raid", 00:08:05.176 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:05.176 "strip_size_kb": 64, 00:08:05.176 "state": "online", 00:08:05.176 "raid_level": "raid0", 00:08:05.176 "superblock": true, 00:08:05.176 "num_base_bdevs": 3, 00:08:05.176 "num_base_bdevs_discovered": 3, 00:08:05.176 "num_base_bdevs_operational": 3, 00:08:05.176 "base_bdevs_list": [ 00:08:05.176 { 00:08:05.176 "name": "NewBaseBdev", 00:08:05.176 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:05.176 "is_configured": true, 00:08:05.176 "data_offset": 2048, 00:08:05.176 "data_size": 63488 00:08:05.176 }, 00:08:05.176 { 00:08:05.176 "name": "BaseBdev2", 00:08:05.176 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:05.176 "is_configured": true, 00:08:05.176 "data_offset": 2048, 00:08:05.176 "data_size": 63488 00:08:05.176 }, 00:08:05.176 { 00:08:05.176 "name": "BaseBdev3", 00:08:05.176 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:05.176 "is_configured": 
true, 00:08:05.176 "data_offset": 2048, 00:08:05.176 "data_size": 63488 00:08:05.176 } 00:08:05.176 ] 00:08:05.176 }' 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.176 16:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.438 [2024-11-28 16:20:57.172929] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.438 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.698 "name": "Existed_Raid", 00:08:05.698 "aliases": [ 00:08:05.698 "9a05a494-06ca-49cb-9208-ecce3720ed85" 00:08:05.698 ], 00:08:05.698 "product_name": "Raid Volume", 
00:08:05.698 "block_size": 512, 00:08:05.698 "num_blocks": 190464, 00:08:05.698 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:05.698 "assigned_rate_limits": { 00:08:05.698 "rw_ios_per_sec": 0, 00:08:05.698 "rw_mbytes_per_sec": 0, 00:08:05.698 "r_mbytes_per_sec": 0, 00:08:05.698 "w_mbytes_per_sec": 0 00:08:05.698 }, 00:08:05.698 "claimed": false, 00:08:05.698 "zoned": false, 00:08:05.698 "supported_io_types": { 00:08:05.698 "read": true, 00:08:05.698 "write": true, 00:08:05.698 "unmap": true, 00:08:05.698 "flush": true, 00:08:05.698 "reset": true, 00:08:05.698 "nvme_admin": false, 00:08:05.698 "nvme_io": false, 00:08:05.698 "nvme_io_md": false, 00:08:05.698 "write_zeroes": true, 00:08:05.698 "zcopy": false, 00:08:05.698 "get_zone_info": false, 00:08:05.698 "zone_management": false, 00:08:05.698 "zone_append": false, 00:08:05.698 "compare": false, 00:08:05.698 "compare_and_write": false, 00:08:05.698 "abort": false, 00:08:05.698 "seek_hole": false, 00:08:05.698 "seek_data": false, 00:08:05.698 "copy": false, 00:08:05.698 "nvme_iov_md": false 00:08:05.698 }, 00:08:05.698 "memory_domains": [ 00:08:05.698 { 00:08:05.698 "dma_device_id": "system", 00:08:05.698 "dma_device_type": 1 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.698 "dma_device_type": 2 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "dma_device_id": "system", 00:08:05.698 "dma_device_type": 1 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.698 "dma_device_type": 2 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "dma_device_id": "system", 00:08:05.698 "dma_device_type": 1 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.698 "dma_device_type": 2 00:08:05.698 } 00:08:05.698 ], 00:08:05.698 "driver_specific": { 00:08:05.698 "raid": { 00:08:05.698 "uuid": "9a05a494-06ca-49cb-9208-ecce3720ed85", 00:08:05.698 "strip_size_kb": 64, 00:08:05.698 "state": "online", 00:08:05.698 
"raid_level": "raid0", 00:08:05.698 "superblock": true, 00:08:05.698 "num_base_bdevs": 3, 00:08:05.698 "num_base_bdevs_discovered": 3, 00:08:05.698 "num_base_bdevs_operational": 3, 00:08:05.698 "base_bdevs_list": [ 00:08:05.698 { 00:08:05.698 "name": "NewBaseBdev", 00:08:05.698 "uuid": "2798b5f8-ea54-42bb-a3a8-e79ef1dd0638", 00:08:05.698 "is_configured": true, 00:08:05.698 "data_offset": 2048, 00:08:05.698 "data_size": 63488 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "name": "BaseBdev2", 00:08:05.698 "uuid": "2bca9411-b9a8-4371-ac9d-a7ea0c43cbc0", 00:08:05.698 "is_configured": true, 00:08:05.698 "data_offset": 2048, 00:08:05.698 "data_size": 63488 00:08:05.698 }, 00:08:05.698 { 00:08:05.698 "name": "BaseBdev3", 00:08:05.698 "uuid": "be02a0ca-26f2-4bc7-a970-b9948bc89568", 00:08:05.698 "is_configured": true, 00:08:05.698 "data_offset": 2048, 00:08:05.698 "data_size": 63488 00:08:05.698 } 00:08:05.698 ] 00:08:05.698 } 00:08:05.698 } 00:08:05.698 }' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:05.698 BaseBdev2 00:08:05.698 BaseBdev3' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 
00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.698 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.699 [2024-11-28 16:20:57.408257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.699 [2024-11-28 16:20:57.408322] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.699 [2024-11-28 16:20:57.408412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.699 [2024-11-28 16:20:57.408489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.699 [2024-11-28 16:20:57.408537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75667 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75667 ']' 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75667 00:08:05.699 16:20:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75667 00:08:05.699 killing process with pid 75667 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75667' 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75667 00:08:05.699 [2024-11-28 16:20:57.461821] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.699 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75667 00:08:05.958 [2024-11-28 16:20:57.492456] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.958 16:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:05.958 00:08:05.958 real 0m8.512s 00:08:05.958 user 0m14.535s 00:08:05.958 sys 0m1.745s 00:08:05.958 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.958 16:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.958 ************************************ 00:08:05.958 END TEST raid_state_function_test_sb 00:08:05.958 ************************************ 00:08:06.218 16:20:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:06.218 16:20:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:06.218 16:20:57 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.218 16:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.218 ************************************ 00:08:06.218 START TEST raid_superblock_test 00:08:06.218 ************************************ 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:06.218 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:06.219 16:20:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76271 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76271 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76271 ']' 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.219 16:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.219 [2024-11-28 16:20:57.877849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:06.219 [2024-11-28 16:20:57.878066] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76271 ] 00:08:06.478 [2024-11-28 16:20:58.037530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.478 [2024-11-28 16:20:58.080916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.478 [2024-11-28 16:20:58.121749] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.478 [2024-11-28 16:20:58.121793] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:07.080 
16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.080 malloc1 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.080 [2024-11-28 16:20:58.707092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.080 [2024-11-28 16:20:58.707202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.080 [2024-11-28 16:20:58.707248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:07.080 [2024-11-28 16:20:58.707285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.080 [2024-11-28 16:20:58.709278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.080 [2024-11-28 16:20:58.709349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.080 pt1 00:08:07.080 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.081 malloc2 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.081 [2024-11-28 16:20:58.755273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.081 [2024-11-28 16:20:58.755456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.081 [2024-11-28 16:20:58.755531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:07.081 [2024-11-28 16:20:58.755607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.081 [2024-11-28 16:20:58.760174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.081 [2024-11-28 16:20:58.760275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.081 
pt2 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.081 malloc3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.081 [2024-11-28 16:20:58.785481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:07.081 [2024-11-28 16:20:58.785566] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.081 [2024-11-28 16:20:58.785596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:07.081 [2024-11-28 16:20:58.785625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.081 [2024-11-28 16:20:58.787548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.081 [2024-11-28 16:20:58.787614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:07.081 pt3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.081 [2024-11-28 16:20:58.797501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.081 [2024-11-28 16:20:58.799226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:07.081 [2024-11-28 16:20:58.799320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:07.081 [2024-11-28 16:20:58.799467] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:07.081 [2024-11-28 16:20:58.799528] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:07.081 [2024-11-28 16:20:58.799791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:07.081 [2024-11-28 16:20:58.799951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:07.081 [2024-11-28 16:20:58.799996] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:07.081 [2024-11-28 16:20:58.800148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.081 16:20:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.081 "name": "raid_bdev1", 00:08:07.081 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:07.081 "strip_size_kb": 64, 00:08:07.081 "state": "online", 00:08:07.081 "raid_level": "raid0", 00:08:07.081 "superblock": true, 00:08:07.081 "num_base_bdevs": 3, 00:08:07.081 "num_base_bdevs_discovered": 3, 00:08:07.081 "num_base_bdevs_operational": 3, 00:08:07.081 "base_bdevs_list": [ 00:08:07.081 { 00:08:07.081 "name": "pt1", 00:08:07.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.081 "is_configured": true, 00:08:07.081 "data_offset": 2048, 00:08:07.081 "data_size": 63488 00:08:07.081 }, 00:08:07.081 { 00:08:07.081 "name": "pt2", 00:08:07.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.081 "is_configured": true, 00:08:07.081 "data_offset": 2048, 00:08:07.081 "data_size": 63488 00:08:07.081 }, 00:08:07.081 { 00:08:07.081 "name": "pt3", 00:08:07.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:07.081 "is_configured": true, 00:08:07.081 "data_offset": 2048, 00:08:07.081 "data_size": 63488 00:08:07.081 } 00:08:07.081 ] 00:08:07.081 }' 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.081 16:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.649 [2024-11-28 16:20:59.261000] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.649 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.650 "name": "raid_bdev1", 00:08:07.650 "aliases": [ 00:08:07.650 "5675db9a-ff60-4b9b-ae00-058bd62897c8" 00:08:07.650 ], 00:08:07.650 "product_name": "Raid Volume", 00:08:07.650 "block_size": 512, 00:08:07.650 "num_blocks": 190464, 00:08:07.650 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:07.650 "assigned_rate_limits": { 00:08:07.650 "rw_ios_per_sec": 0, 00:08:07.650 "rw_mbytes_per_sec": 0, 00:08:07.650 "r_mbytes_per_sec": 0, 00:08:07.650 "w_mbytes_per_sec": 0 00:08:07.650 }, 00:08:07.650 "claimed": false, 00:08:07.650 "zoned": false, 00:08:07.650 "supported_io_types": { 00:08:07.650 "read": true, 00:08:07.650 "write": true, 00:08:07.650 "unmap": true, 00:08:07.650 "flush": true, 00:08:07.650 "reset": true, 00:08:07.650 "nvme_admin": false, 00:08:07.650 "nvme_io": false, 00:08:07.650 "nvme_io_md": false, 00:08:07.650 "write_zeroes": true, 00:08:07.650 "zcopy": false, 00:08:07.650 "get_zone_info": false, 00:08:07.650 "zone_management": false, 00:08:07.650 "zone_append": false, 00:08:07.650 "compare": 
false, 00:08:07.650 "compare_and_write": false, 00:08:07.650 "abort": false, 00:08:07.650 "seek_hole": false, 00:08:07.650 "seek_data": false, 00:08:07.650 "copy": false, 00:08:07.650 "nvme_iov_md": false 00:08:07.650 }, 00:08:07.650 "memory_domains": [ 00:08:07.650 { 00:08:07.650 "dma_device_id": "system", 00:08:07.650 "dma_device_type": 1 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.650 "dma_device_type": 2 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "dma_device_id": "system", 00:08:07.650 "dma_device_type": 1 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.650 "dma_device_type": 2 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "dma_device_id": "system", 00:08:07.650 "dma_device_type": 1 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.650 "dma_device_type": 2 00:08:07.650 } 00:08:07.650 ], 00:08:07.650 "driver_specific": { 00:08:07.650 "raid": { 00:08:07.650 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:07.650 "strip_size_kb": 64, 00:08:07.650 "state": "online", 00:08:07.650 "raid_level": "raid0", 00:08:07.650 "superblock": true, 00:08:07.650 "num_base_bdevs": 3, 00:08:07.650 "num_base_bdevs_discovered": 3, 00:08:07.650 "num_base_bdevs_operational": 3, 00:08:07.650 "base_bdevs_list": [ 00:08:07.650 { 00:08:07.650 "name": "pt1", 00:08:07.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.650 "is_configured": true, 00:08:07.650 "data_offset": 2048, 00:08:07.650 "data_size": 63488 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "name": "pt2", 00:08:07.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.650 "is_configured": true, 00:08:07.650 "data_offset": 2048, 00:08:07.650 "data_size": 63488 00:08:07.650 }, 00:08:07.650 { 00:08:07.650 "name": "pt3", 00:08:07.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:07.650 "is_configured": true, 00:08:07.650 "data_offset": 2048, 00:08:07.650 "data_size": 
63488 00:08:07.650 } 00:08:07.650 ] 00:08:07.650 } 00:08:07.650 } 00:08:07.650 }' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:07.650 pt2 00:08:07.650 pt3' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.650 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 [2024-11-28 16:20:59.520536] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5675db9a-ff60-4b9b-ae00-058bd62897c8 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5675db9a-ff60-4b9b-ae00-058bd62897c8 ']' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 [2024-11-28 16:20:59.556166] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.909 [2024-11-28 16:20:59.556231] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.909 [2024-11-28 16:20:59.556321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.909 [2024-11-28 16:20:59.556397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.909 [2024-11-28 16:20:59.556434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
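The delete sequence traced here (bdev_raid.sh@441-449) tears the volume down in reverse order of setup: the raid bdev goes first, driving the online-to-offline transition and the `raid_bdev_cleanup` debug lines above, and only then is each passthru base bdev released. A sketch of that ordering, again with a stubbed `rpc_cmd` recording the calls instead of issuing real JSON-RPC:

```shell
#!/usr/bin/env bash
# Sketch of the teardown order traced above (bdev_raid.sh@441-449).
# rpc_cmd records the calls; the real helper issues them over JSON-RPC.
rpc_cmd() { teardown_calls+=("$*"); }

base_bdevs_pt=(pt1 pt2 pt3)
teardown_calls=()

# Raid bdev first: its base bdevs remain claimed until it is deleted.
rpc_cmd bdev_raid_delete raid_bdev1

# Then release each passthru base bdev in turn.
for i in "${base_bdevs_pt[@]}"; do
    rpc_cmd bdev_passthru_delete "$i"
done
```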
00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:07.909 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:07.910 16:20:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.910 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.169 [2024-11-28 16:20:59.707925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:08.169 [2024-11-28 16:20:59.709755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:08.169 [2024-11-28 16:20:59.709850] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:08.169 [2024-11-28 16:20:59.709919] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:08.169 [2024-11-28 16:20:59.710030] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:08.169 [2024-11-28 16:20:59.710091] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:08.169 [2024-11-28 16:20:59.710143] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.169 [2024-11-28 16:20:59.710173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:08.169 request: 00:08:08.169 { 00:08:08.169 "name": "raid_bdev1", 00:08:08.169 "raid_level": "raid0", 00:08:08.169 "base_bdevs": [ 00:08:08.169 "malloc1", 00:08:08.169 "malloc2", 00:08:08.169 "malloc3" 00:08:08.169 ], 00:08:08.169 "strip_size_kb": 64, 00:08:08.169 "superblock": false, 00:08:08.169 "method": "bdev_raid_create", 00:08:08.169 "req_id": 1 00:08:08.169 } 00:08:08.169 Got JSON-RPC error response 00:08:08.169 response: 00:08:08.169 { 00:08:08.169 "code": -17, 00:08:08.169 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:08.169 } 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test 
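The `NOT rpc_cmd bdev_raid_create ...` step asserts that re-creating `raid_bdev1` over the already-claimed malloc bdevs must fail, and the JSON-RPC error captured above (code -17, "File exists") is that expected failure. In essence, autotest_common.sh's `NOT` helper inverts the exit status of the command it wraps (the real helper also juggles xtrace state, omitted here). A minimal sketch, with `rpc_cmd` stubbed to fail the way the real call does:

```shell
#!/usr/bin/env bash
# Sketch of the negative-test pattern above: NOT succeeds iff its command fails.
NOT() {
    ! "$@"
}

# Stub standing in for the failing bdev_raid_create call, which in the trace
# returns the JSON-RPC error {-17, "File exists"} and a nonzero exit status.
rpc_cmd() { return 1; }

# Succeeds (exit 0) because the wrapped command fails, as the test expects.
NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
```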
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.169 [2024-11-28 16:20:59.775780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.169 [2024-11-28 16:20:59.775867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.169 [2024-11-28 16:20:59.775899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:08.169 [2024-11-28 16:20:59.775927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.169 [2024-11-28 16:20:59.777926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.169 [2024-11-28 16:20:59.777994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.169 [2024-11-28 16:20:59.778074] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:08.169 [2024-11-28 16:20:59.778123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:08.169 pt1 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.169 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.169 "name": "raid_bdev1", 00:08:08.169 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:08.169 
"strip_size_kb": 64, 00:08:08.169 "state": "configuring", 00:08:08.169 "raid_level": "raid0", 00:08:08.169 "superblock": true, 00:08:08.169 "num_base_bdevs": 3, 00:08:08.169 "num_base_bdevs_discovered": 1, 00:08:08.169 "num_base_bdevs_operational": 3, 00:08:08.169 "base_bdevs_list": [ 00:08:08.169 { 00:08:08.169 "name": "pt1", 00:08:08.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.169 "is_configured": true, 00:08:08.170 "data_offset": 2048, 00:08:08.170 "data_size": 63488 00:08:08.170 }, 00:08:08.170 { 00:08:08.170 "name": null, 00:08:08.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.170 "is_configured": false, 00:08:08.170 "data_offset": 2048, 00:08:08.170 "data_size": 63488 00:08:08.170 }, 00:08:08.170 { 00:08:08.170 "name": null, 00:08:08.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:08.170 "is_configured": false, 00:08:08.170 "data_offset": 2048, 00:08:08.170 "data_size": 63488 00:08:08.170 } 00:08:08.170 ] 00:08:08.170 }' 00:08:08.170 16:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.170 16:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.429 [2024-11-28 16:21:00.175176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.429 [2024-11-28 16:21:00.175265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.429 [2024-11-28 16:21:00.175299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:08.429 [2024-11-28 16:21:00.175330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.429 [2024-11-28 16:21:00.175706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.429 [2024-11-28 16:21:00.175765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.429 [2024-11-28 16:21:00.175862] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.429 [2024-11-28 16:21:00.175914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.429 pt2 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.429 [2024-11-28 16:21:00.187159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.429 16:21:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.429 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.690 "name": "raid_bdev1", 00:08:08.690 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:08.690 "strip_size_kb": 64, 00:08:08.690 "state": "configuring", 00:08:08.690 "raid_level": "raid0", 00:08:08.690 "superblock": true, 00:08:08.690 "num_base_bdevs": 3, 00:08:08.690 "num_base_bdevs_discovered": 1, 00:08:08.690 "num_base_bdevs_operational": 3, 00:08:08.690 "base_bdevs_list": [ 00:08:08.690 { 00:08:08.690 "name": "pt1", 00:08:08.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.690 "is_configured": true, 00:08:08.690 "data_offset": 2048, 00:08:08.690 "data_size": 63488 00:08:08.690 }, 00:08:08.690 { 00:08:08.690 "name": null, 00:08:08.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.690 "is_configured": false, 00:08:08.690 "data_offset": 0, 00:08:08.690 "data_size": 63488 00:08:08.690 }, 00:08:08.690 { 00:08:08.690 "name": null, 00:08:08.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:08.690 
"is_configured": false, 00:08:08.690 "data_offset": 2048, 00:08:08.690 "data_size": 63488 00:08:08.690 } 00:08:08.690 ] 00:08:08.690 }' 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.690 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 [2024-11-28 16:21:00.666470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.951 [2024-11-28 16:21:00.666576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.951 [2024-11-28 16:21:00.666610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:08.951 [2024-11-28 16:21:00.666637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.951 [2024-11-28 16:21:00.667057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.951 [2024-11-28 16:21:00.667114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.951 [2024-11-28 16:21:00.667225] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.951 [2024-11-28 16:21:00.667273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.951 pt2 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 [2024-11-28 16:21:00.674405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:08.951 [2024-11-28 16:21:00.674482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.951 [2024-11-28 16:21:00.674514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:08.951 [2024-11-28 16:21:00.674544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.951 [2024-11-28 16:21:00.674879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.951 [2024-11-28 16:21:00.674930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:08.951 [2024-11-28 16:21:00.675012] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:08.951 [2024-11-28 16:21:00.675055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:08.951 [2024-11-28 16:21:00.675163] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:08.951 [2024-11-28 16:21:00.675198] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:08.951 [2024-11-28 16:21:00.675418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.951 [2024-11-28 16:21:00.675519] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:08.951 [2024-11-28 16:21:00.675530] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:08.951 [2024-11-28 16:21:00.675623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.951 pt3 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.951 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.211 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.211 "name": "raid_bdev1", 00:08:09.211 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:09.212 "strip_size_kb": 64, 00:08:09.212 "state": "online", 00:08:09.212 "raid_level": "raid0", 00:08:09.212 "superblock": true, 00:08:09.212 "num_base_bdevs": 3, 00:08:09.212 "num_base_bdevs_discovered": 3, 00:08:09.212 "num_base_bdevs_operational": 3, 00:08:09.212 "base_bdevs_list": [ 00:08:09.212 { 00:08:09.212 "name": "pt1", 00:08:09.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.212 "is_configured": true, 00:08:09.212 "data_offset": 2048, 00:08:09.212 "data_size": 63488 00:08:09.212 }, 00:08:09.212 { 00:08:09.212 "name": "pt2", 00:08:09.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.212 "is_configured": true, 00:08:09.212 "data_offset": 2048, 00:08:09.212 "data_size": 63488 00:08:09.212 }, 00:08:09.212 { 00:08:09.212 "name": "pt3", 00:08:09.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:09.212 "is_configured": true, 00:08:09.212 "data_offset": 2048, 00:08:09.212 "data_size": 63488 00:08:09.212 } 00:08:09.212 ] 00:08:09.212 }' 00:08:09.212 16:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.212 16:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.472 16:21:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.472 [2024-11-28 16:21:01.109969] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.472 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.472 "name": "raid_bdev1", 00:08:09.472 "aliases": [ 00:08:09.472 "5675db9a-ff60-4b9b-ae00-058bd62897c8" 00:08:09.472 ], 00:08:09.472 "product_name": "Raid Volume", 00:08:09.472 "block_size": 512, 00:08:09.472 "num_blocks": 190464, 00:08:09.472 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:09.472 "assigned_rate_limits": { 00:08:09.472 "rw_ios_per_sec": 0, 00:08:09.472 "rw_mbytes_per_sec": 0, 00:08:09.472 "r_mbytes_per_sec": 0, 00:08:09.472 "w_mbytes_per_sec": 0 00:08:09.472 }, 00:08:09.472 "claimed": false, 00:08:09.472 "zoned": false, 00:08:09.472 "supported_io_types": { 00:08:09.473 "read": true, 00:08:09.473 "write": true, 00:08:09.473 "unmap": true, 00:08:09.473 "flush": true, 00:08:09.473 "reset": true, 00:08:09.473 "nvme_admin": false, 00:08:09.473 "nvme_io": false, 00:08:09.473 "nvme_io_md": false, 00:08:09.473 
"write_zeroes": true, 00:08:09.473 "zcopy": false, 00:08:09.473 "get_zone_info": false, 00:08:09.473 "zone_management": false, 00:08:09.473 "zone_append": false, 00:08:09.473 "compare": false, 00:08:09.473 "compare_and_write": false, 00:08:09.473 "abort": false, 00:08:09.473 "seek_hole": false, 00:08:09.473 "seek_data": false, 00:08:09.473 "copy": false, 00:08:09.473 "nvme_iov_md": false 00:08:09.473 }, 00:08:09.473 "memory_domains": [ 00:08:09.473 { 00:08:09.473 "dma_device_id": "system", 00:08:09.473 "dma_device_type": 1 00:08:09.473 }, 00:08:09.473 { 00:08:09.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.473 "dma_device_type": 2 00:08:09.473 }, 00:08:09.473 { 00:08:09.473 "dma_device_id": "system", 00:08:09.473 "dma_device_type": 1 00:08:09.473 }, 00:08:09.473 { 00:08:09.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.473 "dma_device_type": 2 00:08:09.473 }, 00:08:09.473 { 00:08:09.473 "dma_device_id": "system", 00:08:09.473 "dma_device_type": 1 00:08:09.473 }, 00:08:09.473 { 00:08:09.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.473 "dma_device_type": 2 00:08:09.473 } 00:08:09.473 ], 00:08:09.473 "driver_specific": { 00:08:09.473 "raid": { 00:08:09.473 "uuid": "5675db9a-ff60-4b9b-ae00-058bd62897c8", 00:08:09.473 "strip_size_kb": 64, 00:08:09.473 "state": "online", 00:08:09.473 "raid_level": "raid0", 00:08:09.473 "superblock": true, 00:08:09.473 "num_base_bdevs": 3, 00:08:09.473 "num_base_bdevs_discovered": 3, 00:08:09.473 "num_base_bdevs_operational": 3, 00:08:09.473 "base_bdevs_list": [ 00:08:09.473 { 00:08:09.473 "name": "pt1", 00:08:09.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.473 "is_configured": true, 00:08:09.473 "data_offset": 2048, 00:08:09.473 "data_size": 63488 00:08:09.473 }, 00:08:09.473 { 00:08:09.473 "name": "pt2", 00:08:09.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.473 "is_configured": true, 00:08:09.473 "data_offset": 2048, 00:08:09.473 "data_size": 63488 00:08:09.473 }, 00:08:09.473 
{ 00:08:09.473 "name": "pt3", 00:08:09.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:09.473 "is_configured": true, 00:08:09.473 "data_offset": 2048, 00:08:09.473 "data_size": 63488 00:08:09.473 } 00:08:09.473 ] 00:08:09.473 } 00:08:09.473 } 00:08:09.473 }' 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.473 pt2 00:08:09.473 pt3' 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.473 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.734 16:21:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.734 
[2024-11-28 16:21:01.361449] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5675db9a-ff60-4b9b-ae00-058bd62897c8 '!=' 5675db9a-ff60-4b9b-ae00-058bd62897c8 ']' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76271 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76271 ']' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76271 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76271 00:08:09.734 killing process with pid 76271 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76271' 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76271 00:08:09.734 [2024-11-28 16:21:01.431213] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.734 [2024-11-28 16:21:01.431287] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.734 [2024-11-28 16:21:01.431343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.734 [2024-11-28 16:21:01.431351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:09.734 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76271 00:08:09.734 [2024-11-28 16:21:01.464111] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.994 16:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:09.994 00:08:09.994 real 0m3.912s 00:08:09.994 user 0m6.130s 00:08:09.994 sys 0m0.868s 00:08:09.994 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.994 ************************************ 00:08:09.994 END TEST raid_superblock_test 00:08:09.994 ************************************ 00:08:09.994 16:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.994 16:21:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:09.994 16:21:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:09.994 16:21:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.994 16:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.255 ************************************ 00:08:10.255 START TEST raid_read_error_test 00:08:10.255 ************************************ 00:08:10.255 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:10.255 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:10.255 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:10.255 16:21:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:10.255 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6RSkRYxyTQ 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76513 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76513 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76513 ']' 00:08:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.256 16:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.256 [2024-11-28 16:21:01.872262] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:10.256 [2024-11-28 16:21:01.872384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76513 ] 00:08:10.516 [2024-11-28 16:21:02.032776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.516 [2024-11-28 16:21:02.077229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.516 [2024-11-28 16:21:02.119663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.516 [2024-11-28 16:21:02.119730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.086 BaseBdev1_malloc 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.086 true 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.086 [2024-11-28 16:21:02.730144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.086 [2024-11-28 16:21:02.730198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.086 [2024-11-28 16:21:02.730219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.086 [2024-11-28 16:21:02.730234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.086 [2024-11-28 16:21:02.732327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.086 [2024-11-28 16:21:02.732364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.086 BaseBdev1 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.086 BaseBdev2_malloc 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.086 true 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.086 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.086 [2024-11-28 16:21:02.780656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.086 [2024-11-28 16:21:02.780748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.086 [2024-11-28 16:21:02.780770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.087 [2024-11-28 16:21:02.780779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.087 [2024-11-28 16:21:02.782824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.087 [2024-11-28 16:21:02.782869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.087 BaseBdev2 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.087 BaseBdev3_malloc 00:08:11.087 16:21:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.087 true 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.087 [2024-11-28 16:21:02.821518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:11.087 [2024-11-28 16:21:02.821602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.087 [2024-11-28 16:21:02.821623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:11.087 [2024-11-28 16:21:02.821631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.087 [2024-11-28 16:21:02.823658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.087 [2024-11-28 16:21:02.823720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:11.087 BaseBdev3 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.087 [2024-11-28 16:21:02.833558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.087 [2024-11-28 16:21:02.835304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.087 [2024-11-28 16:21:02.835380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.087 [2024-11-28 16:21:02.835545] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:11.087 [2024-11-28 16:21:02.835559] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:11.087 [2024-11-28 16:21:02.835796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:11.087 [2024-11-28 16:21:02.835933] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:11.087 [2024-11-28 16:21:02.835944] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:11.087 [2024-11-28 16:21:02.836063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.087 16:21:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.087 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.347 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.347 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.347 "name": "raid_bdev1", 00:08:11.347 "uuid": "9cf87f3e-10c1-404c-988e-111ec49a93c3", 00:08:11.347 "strip_size_kb": 64, 00:08:11.347 "state": "online", 00:08:11.347 "raid_level": "raid0", 00:08:11.347 "superblock": true, 00:08:11.347 "num_base_bdevs": 3, 00:08:11.347 "num_base_bdevs_discovered": 3, 00:08:11.347 "num_base_bdevs_operational": 3, 00:08:11.347 "base_bdevs_list": [ 00:08:11.347 { 00:08:11.347 "name": "BaseBdev1", 00:08:11.347 "uuid": "cfca36ac-c5ad-50ea-acbf-52bf50bb609d", 00:08:11.347 "is_configured": true, 00:08:11.347 "data_offset": 2048, 00:08:11.347 "data_size": 63488 00:08:11.347 }, 00:08:11.347 { 00:08:11.347 "name": "BaseBdev2", 00:08:11.347 "uuid": "3d2125d3-948d-50cc-9934-1014279c2cdf", 00:08:11.347 "is_configured": true, 00:08:11.347 "data_offset": 2048, 00:08:11.347 "data_size": 63488 
00:08:11.347 }, 00:08:11.347 { 00:08:11.347 "name": "BaseBdev3", 00:08:11.347 "uuid": "ae2a44ac-da83-5091-aa68-81fa65565d02", 00:08:11.347 "is_configured": true, 00:08:11.347 "data_offset": 2048, 00:08:11.347 "data_size": 63488 00:08:11.347 } 00:08:11.347 ] 00:08:11.347 }' 00:08:11.347 16:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.347 16:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.607 16:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:11.607 16:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:11.607 [2024-11-28 16:21:03.309094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.548 "name": "raid_bdev1", 00:08:12.548 "uuid": "9cf87f3e-10c1-404c-988e-111ec49a93c3", 00:08:12.548 "strip_size_kb": 64, 00:08:12.548 "state": "online", 00:08:12.548 "raid_level": "raid0", 00:08:12.548 "superblock": true, 00:08:12.548 "num_base_bdevs": 3, 00:08:12.548 "num_base_bdevs_discovered": 3, 00:08:12.548 "num_base_bdevs_operational": 3, 00:08:12.548 "base_bdevs_list": [ 00:08:12.548 { 00:08:12.548 "name": "BaseBdev1", 00:08:12.548 "uuid": "cfca36ac-c5ad-50ea-acbf-52bf50bb609d", 00:08:12.548 "is_configured": true, 00:08:12.548 "data_offset": 2048, 00:08:12.548 "data_size": 63488 
00:08:12.548 }, 00:08:12.548 { 00:08:12.548 "name": "BaseBdev2", 00:08:12.548 "uuid": "3d2125d3-948d-50cc-9934-1014279c2cdf", 00:08:12.548 "is_configured": true, 00:08:12.548 "data_offset": 2048, 00:08:12.548 "data_size": 63488 00:08:12.548 }, 00:08:12.548 { 00:08:12.548 "name": "BaseBdev3", 00:08:12.548 "uuid": "ae2a44ac-da83-5091-aa68-81fa65565d02", 00:08:12.548 "is_configured": true, 00:08:12.548 "data_offset": 2048, 00:08:12.548 "data_size": 63488 00:08:12.548 } 00:08:12.548 ] 00:08:12.548 }' 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.548 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.119 [2024-11-28 16:21:04.692590] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.119 [2024-11-28 16:21:04.692689] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.119 [2024-11-28 16:21:04.695142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.119 [2024-11-28 16:21:04.695239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.119 [2024-11-28 16:21:04.695293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.119 [2024-11-28 16:21:04.695335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:13.119 { 00:08:13.119 "results": [ 00:08:13.119 { 00:08:13.119 "job": "raid_bdev1", 00:08:13.119 "core_mask": "0x1", 00:08:13.119 "workload": "randrw", 00:08:13.119 "percentage": 50, 
00:08:13.119 "status": "finished", 00:08:13.119 "queue_depth": 1, 00:08:13.119 "io_size": 131072, 00:08:13.119 "runtime": 1.384471, 00:08:13.119 "iops": 17609.614069200437, 00:08:13.119 "mibps": 2201.2017586500547, 00:08:13.119 "io_failed": 1, 00:08:13.119 "io_timeout": 0, 00:08:13.119 "avg_latency_us": 78.75904169776415, 00:08:13.119 "min_latency_us": 18.55720524017467, 00:08:13.119 "max_latency_us": 1380.8349344978167 00:08:13.119 } 00:08:13.119 ], 00:08:13.119 "core_count": 1 00:08:13.119 } 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76513 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76513 ']' 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76513 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76513 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76513' 00:08:13.119 killing process with pid 76513 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76513 00:08:13.119 [2024-11-28 16:21:04.733969] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.119 16:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76513 00:08:13.119 [2024-11-28 
16:21:04.759309] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.380 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6RSkRYxyTQ 00:08:13.380 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:13.380 16:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:13.380 ************************************ 00:08:13.380 END TEST raid_read_error_test 00:08:13.380 ************************************ 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:13.380 00:08:13.380 real 0m3.230s 00:08:13.380 user 0m4.042s 00:08:13.380 sys 0m0.530s 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.380 16:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.380 16:21:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:13.380 16:21:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:13.380 16:21:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.380 16:21:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.380 ************************************ 00:08:13.380 START TEST raid_write_error_test 00:08:13.380 ************************************ 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:13.380 16:21:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:13.380 16:21:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qdsSJeQQN2 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76642 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76642 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76642 ']' 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.380 16:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.640 [2024-11-28 16:21:05.187642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:13.640 [2024-11-28 16:21:05.187875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76642 ] 00:08:13.640 [2024-11-28 16:21:05.348291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.640 [2024-11-28 16:21:05.395269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.899 [2024-11-28 16:21:05.438434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.899 [2024-11-28 16:21:05.438559] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.469 BaseBdev1_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.469 true 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.469 [2024-11-28 16:21:06.053194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:14.469 [2024-11-28 16:21:06.053289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.469 [2024-11-28 16:21:06.053327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:14.469 [2024-11-28 16:21:06.053356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.469 [2024-11-28 16:21:06.055472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.469 [2024-11-28 16:21:06.055545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:14.469 BaseBdev1 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.469 BaseBdev2_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.469 true 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.469 [2024-11-28 16:21:06.103523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:14.469 [2024-11-28 16:21:06.103614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.469 [2024-11-28 16:21:06.103649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:14.469 [2024-11-28 16:21:06.103676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.469 [2024-11-28 16:21:06.105723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.469 [2024-11-28 16:21:06.105793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:14.469 BaseBdev2 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.469 16:21:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.469 BaseBdev3_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:14.469 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 true 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 [2024-11-28 16:21:06.144259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:14.470 [2024-11-28 16:21:06.144306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.470 [2024-11-28 16:21:06.144324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.470 [2024-11-28 16:21:06.144333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.470 [2024-11-28 16:21:06.146311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.470 [2024-11-28 16:21:06.146349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:14.470 BaseBdev3 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 [2024-11-28 16:21:06.156309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.470 [2024-11-28 16:21:06.158055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.470 [2024-11-28 16:21:06.158129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.470 [2024-11-28 16:21:06.158291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:14.470 [2024-11-28 16:21:06.158306] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:14.470 [2024-11-28 16:21:06.158532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:14.470 [2024-11-28 16:21:06.158668] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:14.470 [2024-11-28 16:21:06.158678] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:14.470 [2024-11-28 16:21:06.158783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.470 "name": "raid_bdev1", 00:08:14.470 "uuid": "80380cb3-79bb-4d8c-9aa1-696e11759b79", 00:08:14.470 "strip_size_kb": 64, 00:08:14.470 "state": "online", 00:08:14.470 "raid_level": "raid0", 00:08:14.470 "superblock": true, 00:08:14.470 "num_base_bdevs": 3, 00:08:14.470 "num_base_bdevs_discovered": 3, 00:08:14.470 "num_base_bdevs_operational": 3, 00:08:14.470 "base_bdevs_list": [ 00:08:14.470 { 00:08:14.470 "name": "BaseBdev1", 
00:08:14.470 "uuid": "36464b2b-1413-5ba5-983b-1718db2ad4a8", 00:08:14.470 "is_configured": true, 00:08:14.470 "data_offset": 2048, 00:08:14.470 "data_size": 63488 00:08:14.470 }, 00:08:14.470 { 00:08:14.470 "name": "BaseBdev2", 00:08:14.470 "uuid": "91f0f21f-da5f-5875-938d-35f88bc3b019", 00:08:14.470 "is_configured": true, 00:08:14.470 "data_offset": 2048, 00:08:14.470 "data_size": 63488 00:08:14.470 }, 00:08:14.470 { 00:08:14.470 "name": "BaseBdev3", 00:08:14.470 "uuid": "0c2840d6-8168-5028-bed3-6083b51799e7", 00:08:14.470 "is_configured": true, 00:08:14.470 "data_offset": 2048, 00:08:14.470 "data_size": 63488 00:08:14.470 } 00:08:14.470 ] 00:08:14.470 }' 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.470 16:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.056 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.056 16:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.056 [2024-11-28 16:21:06.659982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.995 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.995 "name": "raid_bdev1", 00:08:15.995 "uuid": "80380cb3-79bb-4d8c-9aa1-696e11759b79", 00:08:15.995 "strip_size_kb": 64, 00:08:15.995 "state": "online", 00:08:15.995 
"raid_level": "raid0", 00:08:15.995 "superblock": true, 00:08:15.995 "num_base_bdevs": 3, 00:08:15.995 "num_base_bdevs_discovered": 3, 00:08:15.995 "num_base_bdevs_operational": 3, 00:08:15.995 "base_bdevs_list": [ 00:08:15.995 { 00:08:15.995 "name": "BaseBdev1", 00:08:15.995 "uuid": "36464b2b-1413-5ba5-983b-1718db2ad4a8", 00:08:15.995 "is_configured": true, 00:08:15.995 "data_offset": 2048, 00:08:15.995 "data_size": 63488 00:08:15.995 }, 00:08:15.995 { 00:08:15.995 "name": "BaseBdev2", 00:08:15.995 "uuid": "91f0f21f-da5f-5875-938d-35f88bc3b019", 00:08:15.995 "is_configured": true, 00:08:15.995 "data_offset": 2048, 00:08:15.995 "data_size": 63488 00:08:15.995 }, 00:08:15.995 { 00:08:15.995 "name": "BaseBdev3", 00:08:15.995 "uuid": "0c2840d6-8168-5028-bed3-6083b51799e7", 00:08:15.995 "is_configured": true, 00:08:15.995 "data_offset": 2048, 00:08:15.995 "data_size": 63488 00:08:15.995 } 00:08:15.996 ] 00:08:15.996 }' 00:08:15.996 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.996 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.255 16:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.255 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.255 16:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.255 [2024-11-28 16:21:07.999331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.255 [2024-11-28 16:21:07.999421] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.255 [2024-11-28 16:21:08.001901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.255 [2024-11-28 16:21:08.001983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.255 [2024-11-28 16:21:08.002049] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.255 [2024-11-28 16:21:08.002095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:16.255 { 00:08:16.255 "results": [ 00:08:16.255 { 00:08:16.255 "job": "raid_bdev1", 00:08:16.255 "core_mask": "0x1", 00:08:16.255 "workload": "randrw", 00:08:16.255 "percentage": 50, 00:08:16.255 "status": "finished", 00:08:16.255 "queue_depth": 1, 00:08:16.255 "io_size": 131072, 00:08:16.255 "runtime": 1.340248, 00:08:16.255 "iops": 17250.538706269286, 00:08:16.255 "mibps": 2156.317338283661, 00:08:16.255 "io_failed": 1, 00:08:16.255 "io_timeout": 0, 00:08:16.255 "avg_latency_us": 80.34744844334222, 00:08:16.255 "min_latency_us": 24.258515283842794, 00:08:16.255 "max_latency_us": 1416.6078602620087 00:08:16.255 } 00:08:16.255 ], 00:08:16.255 "core_count": 1 00:08:16.256 } 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76642 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76642 ']' 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76642 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.256 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76642 00:08:16.516 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.516 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.516 16:21:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76642' 00:08:16.516 killing process with pid 76642 00:08:16.516 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76642 00:08:16.516 [2024-11-28 16:21:08.037084] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.516 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76642 00:08:16.516 [2024-11-28 16:21:08.063098] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qdsSJeQQN2 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:16.776 ************************************ 00:08:16.776 END TEST raid_write_error_test 00:08:16.776 ************************************ 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:16.776 00:08:16.776 real 0m3.224s 00:08:16.776 user 0m4.011s 00:08:16.776 sys 0m0.536s 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.776 16:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 16:21:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:16.776 16:21:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:16.776 16:21:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.776 16:21:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.776 16:21:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 ************************************ 00:08:16.776 START TEST raid_state_function_test 00:08:16.776 ************************************ 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.776 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:16.777 16:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:16.777 Process raid pid: 76769 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76769 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76769' 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76769 00:08:16.777 16:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76769 ']' 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.777 16:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.777 [2024-11-28 16:21:08.470689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:16.777 [2024-11-28 16:21:08.470938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.037 [2024-11-28 16:21:08.632740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.037 [2024-11-28 16:21:08.676914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.037 [2024-11-28 16:21:08.718743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.037 [2024-11-28 16:21:08.718870] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.609 [2024-11-28 16:21:09.311885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.609 [2024-11-28 16:21:09.311981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.609 [2024-11-28 16:21:09.312013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.609 [2024-11-28 16:21:09.312026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.609 [2024-11-28 16:21:09.312032] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:17.609 [2024-11-28 16:21:09.312047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.609 "name": "Existed_Raid", 00:08:17.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.609 "strip_size_kb": 64, 00:08:17.609 "state": "configuring", 00:08:17.609 "raid_level": "concat", 00:08:17.609 "superblock": false, 00:08:17.609 "num_base_bdevs": 3, 00:08:17.609 "num_base_bdevs_discovered": 0, 00:08:17.609 "num_base_bdevs_operational": 3, 00:08:17.609 "base_bdevs_list": [ 00:08:17.609 { 00:08:17.609 "name": "BaseBdev1", 00:08:17.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.609 "is_configured": false, 00:08:17.609 "data_offset": 0, 00:08:17.609 "data_size": 0 00:08:17.609 }, 00:08:17.609 { 00:08:17.609 "name": "BaseBdev2", 00:08:17.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.609 "is_configured": false, 00:08:17.609 "data_offset": 0, 00:08:17.609 "data_size": 0 00:08:17.609 }, 00:08:17.609 { 00:08:17.609 "name": "BaseBdev3", 00:08:17.609 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:17.609 "is_configured": false, 00:08:17.609 "data_offset": 0, 00:08:17.609 "data_size": 0 00:08:17.609 } 00:08:17.609 ] 00:08:17.609 }' 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.609 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.178 [2024-11-28 16:21:09.755057] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.178 [2024-11-28 16:21:09.755147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.178 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.178 [2024-11-28 16:21:09.767062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.178 [2024-11-28 16:21:09.767137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.178 [2024-11-28 16:21:09.767164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.178 [2024-11-28 16:21:09.767186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:18.178 [2024-11-28 16:21:09.767205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.178 [2024-11-28 16:21:09.767225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.179 [2024-11-28 16:21:09.787783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.179 BaseBdev1 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.179 [ 00:08:18.179 { 00:08:18.179 "name": "BaseBdev1", 00:08:18.179 "aliases": [ 00:08:18.179 "5f9e9cc0-d369-42c1-87e8-7af50e639100" 00:08:18.179 ], 00:08:18.179 "product_name": "Malloc disk", 00:08:18.179 "block_size": 512, 00:08:18.179 "num_blocks": 65536, 00:08:18.179 "uuid": "5f9e9cc0-d369-42c1-87e8-7af50e639100", 00:08:18.179 "assigned_rate_limits": { 00:08:18.179 "rw_ios_per_sec": 0, 00:08:18.179 "rw_mbytes_per_sec": 0, 00:08:18.179 "r_mbytes_per_sec": 0, 00:08:18.179 "w_mbytes_per_sec": 0 00:08:18.179 }, 00:08:18.179 "claimed": true, 00:08:18.179 "claim_type": "exclusive_write", 00:08:18.179 "zoned": false, 00:08:18.179 "supported_io_types": { 00:08:18.179 "read": true, 00:08:18.179 "write": true, 00:08:18.179 "unmap": true, 00:08:18.179 "flush": true, 00:08:18.179 "reset": true, 00:08:18.179 "nvme_admin": false, 00:08:18.179 "nvme_io": false, 00:08:18.179 "nvme_io_md": false, 00:08:18.179 "write_zeroes": true, 00:08:18.179 "zcopy": true, 00:08:18.179 "get_zone_info": false, 00:08:18.179 "zone_management": false, 00:08:18.179 "zone_append": false, 00:08:18.179 "compare": false, 00:08:18.179 "compare_and_write": false, 00:08:18.179 "abort": true, 00:08:18.179 "seek_hole": false, 00:08:18.179 "seek_data": false, 00:08:18.179 "copy": true, 00:08:18.179 "nvme_iov_md": false 00:08:18.179 }, 00:08:18.179 "memory_domains": [ 00:08:18.179 { 00:08:18.179 "dma_device_id": "system", 00:08:18.179 "dma_device_type": 1 00:08:18.179 }, 00:08:18.179 { 00:08:18.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:18.179 "dma_device_type": 2 00:08:18.179 } 00:08:18.179 ], 00:08:18.179 "driver_specific": {} 00:08:18.179 } 00:08:18.179 ] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.179 16:21:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.179 "name": "Existed_Raid", 00:08:18.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.179 "strip_size_kb": 64, 00:08:18.179 "state": "configuring", 00:08:18.179 "raid_level": "concat", 00:08:18.179 "superblock": false, 00:08:18.179 "num_base_bdevs": 3, 00:08:18.179 "num_base_bdevs_discovered": 1, 00:08:18.179 "num_base_bdevs_operational": 3, 00:08:18.179 "base_bdevs_list": [ 00:08:18.179 { 00:08:18.179 "name": "BaseBdev1", 00:08:18.179 "uuid": "5f9e9cc0-d369-42c1-87e8-7af50e639100", 00:08:18.179 "is_configured": true, 00:08:18.179 "data_offset": 0, 00:08:18.179 "data_size": 65536 00:08:18.179 }, 00:08:18.179 { 00:08:18.179 "name": "BaseBdev2", 00:08:18.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.179 "is_configured": false, 00:08:18.179 "data_offset": 0, 00:08:18.179 "data_size": 0 00:08:18.179 }, 00:08:18.179 { 00:08:18.179 "name": "BaseBdev3", 00:08:18.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.179 "is_configured": false, 00:08:18.179 "data_offset": 0, 00:08:18.179 "data_size": 0 00:08:18.179 } 00:08:18.179 ] 00:08:18.179 }' 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.179 16:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.746 [2024-11-28 16:21:10.239029] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.746 [2024-11-28 16:21:10.239085] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.746 [2024-11-28 16:21:10.251043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.746 [2024-11-28 16:21:10.252914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.746 [2024-11-28 16:21:10.252987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.746 [2024-11-28 16:21:10.253014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.746 [2024-11-28 16:21:10.253038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:18.746 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.747 16:21:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.747 "name": "Existed_Raid", 00:08:18.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.747 "strip_size_kb": 64, 00:08:18.747 "state": "configuring", 00:08:18.747 "raid_level": "concat", 00:08:18.747 "superblock": false, 00:08:18.747 "num_base_bdevs": 3, 00:08:18.747 "num_base_bdevs_discovered": 1, 00:08:18.747 "num_base_bdevs_operational": 3, 00:08:18.747 "base_bdevs_list": [ 00:08:18.747 { 00:08:18.747 "name": "BaseBdev1", 00:08:18.747 "uuid": "5f9e9cc0-d369-42c1-87e8-7af50e639100", 00:08:18.747 "is_configured": true, 00:08:18.747 "data_offset": 
0, 00:08:18.747 "data_size": 65536 00:08:18.747 }, 00:08:18.747 { 00:08:18.747 "name": "BaseBdev2", 00:08:18.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.747 "is_configured": false, 00:08:18.747 "data_offset": 0, 00:08:18.747 "data_size": 0 00:08:18.747 }, 00:08:18.747 { 00:08:18.747 "name": "BaseBdev3", 00:08:18.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.747 "is_configured": false, 00:08:18.747 "data_offset": 0, 00:08:18.747 "data_size": 0 00:08:18.747 } 00:08:18.747 ] 00:08:18.747 }' 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.747 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.005 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.005 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.005 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.006 [2024-11-28 16:21:10.693134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.006 BaseBdev2 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.006 [ 00:08:19.006 { 00:08:19.006 "name": "BaseBdev2", 00:08:19.006 "aliases": [ 00:08:19.006 "10b16327-ac60-4b24-9ac2-cf7b78f344e9" 00:08:19.006 ], 00:08:19.006 "product_name": "Malloc disk", 00:08:19.006 "block_size": 512, 00:08:19.006 "num_blocks": 65536, 00:08:19.006 "uuid": "10b16327-ac60-4b24-9ac2-cf7b78f344e9", 00:08:19.006 "assigned_rate_limits": { 00:08:19.006 "rw_ios_per_sec": 0, 00:08:19.006 "rw_mbytes_per_sec": 0, 00:08:19.006 "r_mbytes_per_sec": 0, 00:08:19.006 "w_mbytes_per_sec": 0 00:08:19.006 }, 00:08:19.006 "claimed": true, 00:08:19.006 "claim_type": "exclusive_write", 00:08:19.006 "zoned": false, 00:08:19.006 "supported_io_types": { 00:08:19.006 "read": true, 00:08:19.006 "write": true, 00:08:19.006 "unmap": true, 00:08:19.006 "flush": true, 00:08:19.006 "reset": true, 00:08:19.006 "nvme_admin": false, 00:08:19.006 "nvme_io": false, 00:08:19.006 "nvme_io_md": false, 00:08:19.006 "write_zeroes": true, 00:08:19.006 "zcopy": true, 00:08:19.006 "get_zone_info": false, 00:08:19.006 "zone_management": false, 00:08:19.006 "zone_append": false, 00:08:19.006 "compare": false, 00:08:19.006 "compare_and_write": false, 00:08:19.006 "abort": true, 00:08:19.006 "seek_hole": 
false, 00:08:19.006 "seek_data": false, 00:08:19.006 "copy": true, 00:08:19.006 "nvme_iov_md": false 00:08:19.006 }, 00:08:19.006 "memory_domains": [ 00:08:19.006 { 00:08:19.006 "dma_device_id": "system", 00:08:19.006 "dma_device_type": 1 00:08:19.006 }, 00:08:19.006 { 00:08:19.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.006 "dma_device_type": 2 00:08:19.006 } 00:08:19.006 ], 00:08:19.006 "driver_specific": {} 00:08:19.006 } 00:08:19.006 ] 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.006 "name": "Existed_Raid", 00:08:19.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.006 "strip_size_kb": 64, 00:08:19.006 "state": "configuring", 00:08:19.006 "raid_level": "concat", 00:08:19.006 "superblock": false, 00:08:19.006 "num_base_bdevs": 3, 00:08:19.006 "num_base_bdevs_discovered": 2, 00:08:19.006 "num_base_bdevs_operational": 3, 00:08:19.006 "base_bdevs_list": [ 00:08:19.006 { 00:08:19.006 "name": "BaseBdev1", 00:08:19.006 "uuid": "5f9e9cc0-d369-42c1-87e8-7af50e639100", 00:08:19.006 "is_configured": true, 00:08:19.006 "data_offset": 0, 00:08:19.006 "data_size": 65536 00:08:19.006 }, 00:08:19.006 { 00:08:19.006 "name": "BaseBdev2", 00:08:19.006 "uuid": "10b16327-ac60-4b24-9ac2-cf7b78f344e9", 00:08:19.006 "is_configured": true, 00:08:19.006 "data_offset": 0, 00:08:19.006 "data_size": 65536 00:08:19.006 }, 00:08:19.006 { 00:08:19.006 "name": "BaseBdev3", 00:08:19.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.006 "is_configured": false, 00:08:19.006 "data_offset": 0, 00:08:19.006 "data_size": 0 00:08:19.006 } 00:08:19.006 ] 00:08:19.006 }' 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.006 16:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 [2024-11-28 16:21:11.163260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.576 [2024-11-28 16:21:11.163365] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:19.576 [2024-11-28 16:21:11.163380] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:19.576 [2024-11-28 16:21:11.163673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:19.576 [2024-11-28 16:21:11.163815] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:19.576 [2024-11-28 16:21:11.163826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:19.576 [2024-11-28 16:21:11.164051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.576 BaseBdev3 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.576 16:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 [ 00:08:19.576 { 00:08:19.576 "name": "BaseBdev3", 00:08:19.576 "aliases": [ 00:08:19.576 "ce9023d7-4d32-4add-a4e2-4e349a5e3945" 00:08:19.576 ], 00:08:19.576 "product_name": "Malloc disk", 00:08:19.576 "block_size": 512, 00:08:19.576 "num_blocks": 65536, 00:08:19.576 "uuid": "ce9023d7-4d32-4add-a4e2-4e349a5e3945", 00:08:19.576 "assigned_rate_limits": { 00:08:19.576 "rw_ios_per_sec": 0, 00:08:19.576 "rw_mbytes_per_sec": 0, 00:08:19.576 "r_mbytes_per_sec": 0, 00:08:19.576 "w_mbytes_per_sec": 0 00:08:19.576 }, 00:08:19.576 "claimed": true, 00:08:19.576 "claim_type": "exclusive_write", 00:08:19.576 "zoned": false, 00:08:19.576 "supported_io_types": { 00:08:19.576 "read": true, 00:08:19.576 "write": true, 00:08:19.576 "unmap": true, 00:08:19.576 "flush": true, 00:08:19.576 "reset": true, 00:08:19.576 "nvme_admin": false, 00:08:19.576 "nvme_io": false, 00:08:19.576 "nvme_io_md": false, 00:08:19.576 "write_zeroes": true, 00:08:19.576 "zcopy": true, 00:08:19.576 "get_zone_info": false, 00:08:19.576 "zone_management": false, 00:08:19.576 "zone_append": false, 00:08:19.576 "compare": false, 
00:08:19.576 "compare_and_write": false, 00:08:19.576 "abort": true, 00:08:19.576 "seek_hole": false, 00:08:19.576 "seek_data": false, 00:08:19.576 "copy": true, 00:08:19.576 "nvme_iov_md": false 00:08:19.576 }, 00:08:19.576 "memory_domains": [ 00:08:19.576 { 00:08:19.576 "dma_device_id": "system", 00:08:19.576 "dma_device_type": 1 00:08:19.576 }, 00:08:19.576 { 00:08:19.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.576 "dma_device_type": 2 00:08:19.576 } 00:08:19.576 ], 00:08:19.576 "driver_specific": {} 00:08:19.576 } 00:08:19.576 ] 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.576 "name": "Existed_Raid", 00:08:19.576 "uuid": "eb7320f5-047f-4f3a-8322-2069eb75d210", 00:08:19.576 "strip_size_kb": 64, 00:08:19.576 "state": "online", 00:08:19.576 "raid_level": "concat", 00:08:19.576 "superblock": false, 00:08:19.576 "num_base_bdevs": 3, 00:08:19.576 "num_base_bdevs_discovered": 3, 00:08:19.576 "num_base_bdevs_operational": 3, 00:08:19.576 "base_bdevs_list": [ 00:08:19.576 { 00:08:19.576 "name": "BaseBdev1", 00:08:19.576 "uuid": "5f9e9cc0-d369-42c1-87e8-7af50e639100", 00:08:19.576 "is_configured": true, 00:08:19.576 "data_offset": 0, 00:08:19.576 "data_size": 65536 00:08:19.576 }, 00:08:19.576 { 00:08:19.576 "name": "BaseBdev2", 00:08:19.576 "uuid": "10b16327-ac60-4b24-9ac2-cf7b78f344e9", 00:08:19.576 "is_configured": true, 00:08:19.576 "data_offset": 0, 00:08:19.576 "data_size": 65536 00:08:19.576 }, 00:08:19.576 { 00:08:19.576 "name": "BaseBdev3", 00:08:19.576 "uuid": "ce9023d7-4d32-4add-a4e2-4e349a5e3945", 00:08:19.576 "is_configured": true, 00:08:19.576 "data_offset": 0, 00:08:19.576 "data_size": 65536 00:08:19.576 } 00:08:19.576 ] 00:08:19.576 }' 00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:19.576 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.834 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.093 [2024-11-28 16:21:11.618790] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.093 "name": "Existed_Raid", 00:08:20.093 "aliases": [ 00:08:20.093 "eb7320f5-047f-4f3a-8322-2069eb75d210" 00:08:20.093 ], 00:08:20.093 "product_name": "Raid Volume", 00:08:20.093 "block_size": 512, 00:08:20.093 "num_blocks": 196608, 00:08:20.093 "uuid": "eb7320f5-047f-4f3a-8322-2069eb75d210", 00:08:20.093 "assigned_rate_limits": { 00:08:20.093 "rw_ios_per_sec": 0, 00:08:20.093 "rw_mbytes_per_sec": 0, 00:08:20.093 "r_mbytes_per_sec": 
0, 00:08:20.093 "w_mbytes_per_sec": 0 00:08:20.093 }, 00:08:20.093 "claimed": false, 00:08:20.093 "zoned": false, 00:08:20.093 "supported_io_types": { 00:08:20.093 "read": true, 00:08:20.093 "write": true, 00:08:20.093 "unmap": true, 00:08:20.093 "flush": true, 00:08:20.093 "reset": true, 00:08:20.093 "nvme_admin": false, 00:08:20.093 "nvme_io": false, 00:08:20.093 "nvme_io_md": false, 00:08:20.093 "write_zeroes": true, 00:08:20.093 "zcopy": false, 00:08:20.093 "get_zone_info": false, 00:08:20.093 "zone_management": false, 00:08:20.093 "zone_append": false, 00:08:20.093 "compare": false, 00:08:20.093 "compare_and_write": false, 00:08:20.093 "abort": false, 00:08:20.093 "seek_hole": false, 00:08:20.093 "seek_data": false, 00:08:20.093 "copy": false, 00:08:20.093 "nvme_iov_md": false 00:08:20.093 }, 00:08:20.093 "memory_domains": [ 00:08:20.093 { 00:08:20.093 "dma_device_id": "system", 00:08:20.093 "dma_device_type": 1 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.093 "dma_device_type": 2 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "dma_device_id": "system", 00:08:20.093 "dma_device_type": 1 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.093 "dma_device_type": 2 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "dma_device_id": "system", 00:08:20.093 "dma_device_type": 1 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.093 "dma_device_type": 2 00:08:20.093 } 00:08:20.093 ], 00:08:20.093 "driver_specific": { 00:08:20.093 "raid": { 00:08:20.093 "uuid": "eb7320f5-047f-4f3a-8322-2069eb75d210", 00:08:20.093 "strip_size_kb": 64, 00:08:20.093 "state": "online", 00:08:20.093 "raid_level": "concat", 00:08:20.093 "superblock": false, 00:08:20.093 "num_base_bdevs": 3, 00:08:20.093 "num_base_bdevs_discovered": 3, 00:08:20.093 "num_base_bdevs_operational": 3, 00:08:20.093 "base_bdevs_list": [ 00:08:20.093 { 00:08:20.093 "name": "BaseBdev1", 
00:08:20.093 "uuid": "5f9e9cc0-d369-42c1-87e8-7af50e639100", 00:08:20.093 "is_configured": true, 00:08:20.093 "data_offset": 0, 00:08:20.093 "data_size": 65536 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "name": "BaseBdev2", 00:08:20.093 "uuid": "10b16327-ac60-4b24-9ac2-cf7b78f344e9", 00:08:20.093 "is_configured": true, 00:08:20.093 "data_offset": 0, 00:08:20.093 "data_size": 65536 00:08:20.093 }, 00:08:20.093 { 00:08:20.093 "name": "BaseBdev3", 00:08:20.093 "uuid": "ce9023d7-4d32-4add-a4e2-4e349a5e3945", 00:08:20.093 "is_configured": true, 00:08:20.093 "data_offset": 0, 00:08:20.093 "data_size": 65536 00:08:20.093 } 00:08:20.093 ] 00:08:20.093 } 00:08:20.093 } 00:08:20.093 }' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:20.093 BaseBdev2 00:08:20.093 BaseBdev3' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.093 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 [2024-11-28 16:21:11.874115] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.353 [2024-11-28 16:21:11.874184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.353 [2024-11-28 16:21:11.874258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.353 "name": "Existed_Raid", 00:08:20.353 "uuid": "eb7320f5-047f-4f3a-8322-2069eb75d210", 00:08:20.353 "strip_size_kb": 64, 00:08:20.353 "state": "offline", 00:08:20.353 "raid_level": "concat", 00:08:20.353 "superblock": false, 00:08:20.353 "num_base_bdevs": 3, 00:08:20.353 "num_base_bdevs_discovered": 2, 00:08:20.353 "num_base_bdevs_operational": 2, 00:08:20.353 "base_bdevs_list": [ 00:08:20.353 { 00:08:20.353 "name": null, 00:08:20.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.353 "is_configured": false, 00:08:20.353 "data_offset": 0, 00:08:20.353 "data_size": 65536 00:08:20.353 }, 00:08:20.353 { 00:08:20.353 "name": "BaseBdev2", 00:08:20.353 "uuid": 
"10b16327-ac60-4b24-9ac2-cf7b78f344e9", 00:08:20.353 "is_configured": true, 00:08:20.353 "data_offset": 0, 00:08:20.353 "data_size": 65536 00:08:20.353 }, 00:08:20.353 { 00:08:20.353 "name": "BaseBdev3", 00:08:20.353 "uuid": "ce9023d7-4d32-4add-a4e2-4e349a5e3945", 00:08:20.353 "is_configured": true, 00:08:20.353 "data_offset": 0, 00:08:20.353 "data_size": 65536 00:08:20.353 } 00:08:20.353 ] 00:08:20.353 }' 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.353 16:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:20.613 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.613 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.613 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.613 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.872 [2024-11-28 16:21:12.432449] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.872 [2024-11-28 16:21:12.503469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:20.872 [2024-11-28 16:21:12.503523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:20.872 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:20.873 16:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.873 BaseBdev2 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.873 
16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.873 [ 00:08:20.873 { 00:08:20.873 "name": "BaseBdev2", 00:08:20.873 "aliases": [ 00:08:20.873 "bc3c309f-4f4e-47f5-80a4-ef2453704b81" 00:08:20.873 ], 00:08:20.873 "product_name": "Malloc disk", 00:08:20.873 "block_size": 512, 00:08:20.873 "num_blocks": 65536, 00:08:20.873 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:20.873 "assigned_rate_limits": { 00:08:20.873 "rw_ios_per_sec": 0, 00:08:20.873 "rw_mbytes_per_sec": 0, 00:08:20.873 "r_mbytes_per_sec": 0, 00:08:20.873 "w_mbytes_per_sec": 0 00:08:20.873 }, 00:08:20.873 "claimed": false, 00:08:20.873 "zoned": false, 00:08:20.873 "supported_io_types": { 00:08:20.873 "read": true, 00:08:20.873 "write": true, 00:08:20.873 "unmap": true, 00:08:20.873 "flush": true, 00:08:20.873 "reset": true, 00:08:20.873 "nvme_admin": false, 00:08:20.873 "nvme_io": false, 00:08:20.873 "nvme_io_md": false, 00:08:20.873 "write_zeroes": true, 
00:08:20.873 "zcopy": true, 00:08:20.873 "get_zone_info": false, 00:08:20.873 "zone_management": false, 00:08:20.873 "zone_append": false, 00:08:20.873 "compare": false, 00:08:20.873 "compare_and_write": false, 00:08:20.873 "abort": true, 00:08:20.873 "seek_hole": false, 00:08:20.873 "seek_data": false, 00:08:20.873 "copy": true, 00:08:20.873 "nvme_iov_md": false 00:08:20.873 }, 00:08:20.873 "memory_domains": [ 00:08:20.873 { 00:08:20.873 "dma_device_id": "system", 00:08:20.873 "dma_device_type": 1 00:08:20.873 }, 00:08:20.873 { 00:08:20.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.873 "dma_device_type": 2 00:08:20.873 } 00:08:20.873 ], 00:08:20.873 "driver_specific": {} 00:08:20.873 } 00:08:20.873 ] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.873 BaseBdev3 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.873 16:21:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.873 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.133 [ 00:08:21.133 { 00:08:21.133 "name": "BaseBdev3", 00:08:21.133 "aliases": [ 00:08:21.133 "1a9d1235-b9fc-4f60-96f5-82722a0b6e66" 00:08:21.133 ], 00:08:21.133 "product_name": "Malloc disk", 00:08:21.133 "block_size": 512, 00:08:21.133 "num_blocks": 65536, 00:08:21.133 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:21.133 "assigned_rate_limits": { 00:08:21.133 "rw_ios_per_sec": 0, 00:08:21.133 "rw_mbytes_per_sec": 0, 00:08:21.133 "r_mbytes_per_sec": 0, 00:08:21.133 "w_mbytes_per_sec": 0 00:08:21.133 }, 00:08:21.133 "claimed": false, 00:08:21.133 "zoned": false, 00:08:21.133 "supported_io_types": { 00:08:21.133 "read": true, 00:08:21.133 "write": true, 00:08:21.133 "unmap": true, 00:08:21.133 "flush": true, 00:08:21.133 "reset": true, 00:08:21.133 "nvme_admin": false, 00:08:21.133 "nvme_io": false, 00:08:21.133 "nvme_io_md": false, 00:08:21.133 "write_zeroes": true, 
00:08:21.133 "zcopy": true, 00:08:21.133 "get_zone_info": false, 00:08:21.133 "zone_management": false, 00:08:21.133 "zone_append": false, 00:08:21.133 "compare": false, 00:08:21.133 "compare_and_write": false, 00:08:21.133 "abort": true, 00:08:21.133 "seek_hole": false, 00:08:21.133 "seek_data": false, 00:08:21.133 "copy": true, 00:08:21.133 "nvme_iov_md": false 00:08:21.133 }, 00:08:21.133 "memory_domains": [ 00:08:21.133 { 00:08:21.133 "dma_device_id": "system", 00:08:21.133 "dma_device_type": 1 00:08:21.133 }, 00:08:21.133 { 00:08:21.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.133 "dma_device_type": 2 00:08:21.133 } 00:08:21.133 ], 00:08:21.133 "driver_specific": {} 00:08:21.133 } 00:08:21.133 ] 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.133 [2024-11-28 16:21:12.679645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.133 [2024-11-28 16:21:12.679750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.133 [2024-11-28 16:21:12.679776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.133 [2024-11-28 16:21:12.681579] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.133 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.134 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.134 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.134 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.134 16:21:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.134 "name": "Existed_Raid", 00:08:21.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.134 "strip_size_kb": 64, 00:08:21.134 "state": "configuring", 00:08:21.134 "raid_level": "concat", 00:08:21.134 "superblock": false, 00:08:21.134 "num_base_bdevs": 3, 00:08:21.134 "num_base_bdevs_discovered": 2, 00:08:21.134 "num_base_bdevs_operational": 3, 00:08:21.134 "base_bdevs_list": [ 00:08:21.134 { 00:08:21.134 "name": "BaseBdev1", 00:08:21.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.134 "is_configured": false, 00:08:21.134 "data_offset": 0, 00:08:21.134 "data_size": 0 00:08:21.134 }, 00:08:21.134 { 00:08:21.134 "name": "BaseBdev2", 00:08:21.134 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:21.134 "is_configured": true, 00:08:21.134 "data_offset": 0, 00:08:21.134 "data_size": 65536 00:08:21.134 }, 00:08:21.134 { 00:08:21.134 "name": "BaseBdev3", 00:08:21.134 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:21.134 "is_configured": true, 00:08:21.134 "data_offset": 0, 00:08:21.134 "data_size": 65536 00:08:21.134 } 00:08:21.134 ] 00:08:21.134 }' 00:08:21.134 16:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.134 16:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.393 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:21.393 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.393 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.393 [2024-11-28 16:21:13.082985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.393 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.393 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:21.393 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.394 "name": "Existed_Raid", 00:08:21.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.394 "strip_size_kb": 64, 00:08:21.394 "state": "configuring", 00:08:21.394 "raid_level": "concat", 00:08:21.394 "superblock": false, 
00:08:21.394 "num_base_bdevs": 3, 00:08:21.394 "num_base_bdevs_discovered": 1, 00:08:21.394 "num_base_bdevs_operational": 3, 00:08:21.394 "base_bdevs_list": [ 00:08:21.394 { 00:08:21.394 "name": "BaseBdev1", 00:08:21.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.394 "is_configured": false, 00:08:21.394 "data_offset": 0, 00:08:21.394 "data_size": 0 00:08:21.394 }, 00:08:21.394 { 00:08:21.394 "name": null, 00:08:21.394 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:21.394 "is_configured": false, 00:08:21.394 "data_offset": 0, 00:08:21.394 "data_size": 65536 00:08:21.394 }, 00:08:21.394 { 00:08:21.394 "name": "BaseBdev3", 00:08:21.394 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:21.394 "is_configured": true, 00:08:21.394 "data_offset": 0, 00:08:21.394 "data_size": 65536 00:08:21.394 } 00:08:21.394 ] 00:08:21.394 }' 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.394 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.962 
16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.962 [2024-11-28 16:21:13.601054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.962 BaseBdev1 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.962 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.963 [ 00:08:21.963 { 00:08:21.963 "name": "BaseBdev1", 00:08:21.963 "aliases": [ 00:08:21.963 "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d" 00:08:21.963 ], 00:08:21.963 "product_name": 
"Malloc disk", 00:08:21.963 "block_size": 512, 00:08:21.963 "num_blocks": 65536, 00:08:21.963 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:21.963 "assigned_rate_limits": { 00:08:21.963 "rw_ios_per_sec": 0, 00:08:21.963 "rw_mbytes_per_sec": 0, 00:08:21.963 "r_mbytes_per_sec": 0, 00:08:21.963 "w_mbytes_per_sec": 0 00:08:21.963 }, 00:08:21.963 "claimed": true, 00:08:21.963 "claim_type": "exclusive_write", 00:08:21.963 "zoned": false, 00:08:21.963 "supported_io_types": { 00:08:21.963 "read": true, 00:08:21.963 "write": true, 00:08:21.963 "unmap": true, 00:08:21.963 "flush": true, 00:08:21.963 "reset": true, 00:08:21.963 "nvme_admin": false, 00:08:21.963 "nvme_io": false, 00:08:21.963 "nvme_io_md": false, 00:08:21.963 "write_zeroes": true, 00:08:21.963 "zcopy": true, 00:08:21.963 "get_zone_info": false, 00:08:21.963 "zone_management": false, 00:08:21.963 "zone_append": false, 00:08:21.963 "compare": false, 00:08:21.963 "compare_and_write": false, 00:08:21.963 "abort": true, 00:08:21.963 "seek_hole": false, 00:08:21.963 "seek_data": false, 00:08:21.963 "copy": true, 00:08:21.963 "nvme_iov_md": false 00:08:21.963 }, 00:08:21.963 "memory_domains": [ 00:08:21.963 { 00:08:21.963 "dma_device_id": "system", 00:08:21.963 "dma_device_type": 1 00:08:21.963 }, 00:08:21.963 { 00:08:21.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.963 "dma_device_type": 2 00:08:21.963 } 00:08:21.963 ], 00:08:21.963 "driver_specific": {} 00:08:21.963 } 00:08:21.963 ] 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.963 16:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.963 "name": "Existed_Raid", 00:08:21.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.963 "strip_size_kb": 64, 00:08:21.963 "state": "configuring", 00:08:21.963 "raid_level": "concat", 00:08:21.963 "superblock": false, 00:08:21.963 "num_base_bdevs": 3, 00:08:21.963 "num_base_bdevs_discovered": 2, 00:08:21.963 "num_base_bdevs_operational": 3, 00:08:21.963 "base_bdevs_list": [ 00:08:21.963 { 00:08:21.963 "name": "BaseBdev1", 
00:08:21.963 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:21.963 "is_configured": true, 00:08:21.963 "data_offset": 0, 00:08:21.963 "data_size": 65536 00:08:21.963 }, 00:08:21.963 { 00:08:21.963 "name": null, 00:08:21.963 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:21.963 "is_configured": false, 00:08:21.963 "data_offset": 0, 00:08:21.963 "data_size": 65536 00:08:21.963 }, 00:08:21.963 { 00:08:21.963 "name": "BaseBdev3", 00:08:21.963 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:21.963 "is_configured": true, 00:08:21.963 "data_offset": 0, 00:08:21.963 "data_size": 65536 00:08:21.963 } 00:08:21.963 ] 00:08:21.963 }' 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.963 16:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.531 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.531 [2024-11-28 16:21:14.084259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:22.531 
16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.532 "name": "Existed_Raid", 00:08:22.532 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:22.532 "strip_size_kb": 64, 00:08:22.532 "state": "configuring", 00:08:22.532 "raid_level": "concat", 00:08:22.532 "superblock": false, 00:08:22.532 "num_base_bdevs": 3, 00:08:22.532 "num_base_bdevs_discovered": 1, 00:08:22.532 "num_base_bdevs_operational": 3, 00:08:22.532 "base_bdevs_list": [ 00:08:22.532 { 00:08:22.532 "name": "BaseBdev1", 00:08:22.532 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:22.532 "is_configured": true, 00:08:22.532 "data_offset": 0, 00:08:22.532 "data_size": 65536 00:08:22.532 }, 00:08:22.532 { 00:08:22.532 "name": null, 00:08:22.532 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:22.532 "is_configured": false, 00:08:22.532 "data_offset": 0, 00:08:22.532 "data_size": 65536 00:08:22.532 }, 00:08:22.532 { 00:08:22.532 "name": null, 00:08:22.532 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:22.532 "is_configured": false, 00:08:22.532 "data_offset": 0, 00:08:22.532 "data_size": 65536 00:08:22.532 } 00:08:22.532 ] 00:08:22.532 }' 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.532 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.791 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:22.791 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.791 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.791 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.051 [2024-11-28 16:21:14.599452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.051 "name": "Existed_Raid", 00:08:23.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.051 "strip_size_kb": 64, 00:08:23.051 "state": "configuring", 00:08:23.051 "raid_level": "concat", 00:08:23.051 "superblock": false, 00:08:23.051 "num_base_bdevs": 3, 00:08:23.051 "num_base_bdevs_discovered": 2, 00:08:23.051 "num_base_bdevs_operational": 3, 00:08:23.051 "base_bdevs_list": [ 00:08:23.051 { 00:08:23.051 "name": "BaseBdev1", 00:08:23.051 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:23.051 "is_configured": true, 00:08:23.051 "data_offset": 0, 00:08:23.051 "data_size": 65536 00:08:23.051 }, 00:08:23.051 { 00:08:23.051 "name": null, 00:08:23.051 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:23.051 "is_configured": false, 00:08:23.051 "data_offset": 0, 00:08:23.051 "data_size": 65536 00:08:23.051 }, 00:08:23.051 { 00:08:23.051 "name": "BaseBdev3", 00:08:23.051 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:23.051 "is_configured": true, 00:08:23.051 "data_offset": 0, 00:08:23.051 "data_size": 65536 00:08:23.051 } 00:08:23.051 ] 00:08:23.051 }' 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.051 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.310 16:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.310 [2024-11-28 16:21:14.994813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.310 16:21:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.310 "name": "Existed_Raid", 00:08:23.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.310 "strip_size_kb": 64, 00:08:23.310 "state": "configuring", 00:08:23.310 "raid_level": "concat", 00:08:23.310 "superblock": false, 00:08:23.310 "num_base_bdevs": 3, 00:08:23.310 "num_base_bdevs_discovered": 1, 00:08:23.310 "num_base_bdevs_operational": 3, 00:08:23.310 "base_bdevs_list": [ 00:08:23.310 { 00:08:23.310 "name": null, 00:08:23.310 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:23.310 "is_configured": false, 00:08:23.310 "data_offset": 0, 00:08:23.310 "data_size": 65536 00:08:23.310 }, 00:08:23.310 { 00:08:23.310 "name": null, 00:08:23.310 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:23.310 "is_configured": false, 00:08:23.310 "data_offset": 0, 00:08:23.310 "data_size": 65536 00:08:23.310 }, 00:08:23.310 { 00:08:23.310 "name": "BaseBdev3", 00:08:23.310 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:23.310 "is_configured": true, 00:08:23.310 "data_offset": 0, 00:08:23.310 "data_size": 65536 00:08:23.310 } 00:08:23.310 ] 00:08:23.310 }' 00:08:23.310 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.310 16:21:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.878 [2024-11-28 16:21:15.432419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.878 16:21:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.878 "name": "Existed_Raid", 00:08:23.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.878 "strip_size_kb": 64, 00:08:23.878 "state": "configuring", 00:08:23.878 "raid_level": "concat", 00:08:23.878 "superblock": false, 00:08:23.878 "num_base_bdevs": 3, 00:08:23.878 "num_base_bdevs_discovered": 2, 00:08:23.878 "num_base_bdevs_operational": 3, 00:08:23.878 "base_bdevs_list": [ 00:08:23.878 { 00:08:23.878 "name": null, 00:08:23.878 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:23.878 "is_configured": false, 00:08:23.878 "data_offset": 0, 00:08:23.878 "data_size": 65536 00:08:23.878 }, 00:08:23.878 { 00:08:23.878 "name": "BaseBdev2", 00:08:23.878 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:23.878 "is_configured": true, 00:08:23.878 "data_offset": 
0, 00:08:23.878 "data_size": 65536 00:08:23.878 }, 00:08:23.878 { 00:08:23.878 "name": "BaseBdev3", 00:08:23.878 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:23.878 "is_configured": true, 00:08:23.878 "data_offset": 0, 00:08:23.878 "data_size": 65536 00:08:23.878 } 00:08:23.878 ] 00:08:23.878 }' 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.878 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.138 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.398 [2024-11-28 16:21:15.946488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:24.398 [2024-11-28 16:21:15.946529] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:24.398 [2024-11-28 16:21:15.946539] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:24.398 [2024-11-28 16:21:15.946803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:24.398 [2024-11-28 16:21:15.946940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:24.398 [2024-11-28 16:21:15.946951] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:24.398 [2024-11-28 16:21:15.947139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.398 NewBaseBdev 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.398 
16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.398 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.398 [ 00:08:24.398 { 00:08:24.398 "name": "NewBaseBdev", 00:08:24.398 "aliases": [ 00:08:24.398 "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d" 00:08:24.398 ], 00:08:24.398 "product_name": "Malloc disk", 00:08:24.398 "block_size": 512, 00:08:24.398 "num_blocks": 65536, 00:08:24.398 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:24.398 "assigned_rate_limits": { 00:08:24.398 "rw_ios_per_sec": 0, 00:08:24.398 "rw_mbytes_per_sec": 0, 00:08:24.398 "r_mbytes_per_sec": 0, 00:08:24.398 "w_mbytes_per_sec": 0 00:08:24.398 }, 00:08:24.398 "claimed": true, 00:08:24.398 "claim_type": "exclusive_write", 00:08:24.398 "zoned": false, 00:08:24.398 "supported_io_types": { 00:08:24.398 "read": true, 00:08:24.398 "write": true, 00:08:24.398 "unmap": true, 00:08:24.398 "flush": true, 00:08:24.398 "reset": true, 00:08:24.398 "nvme_admin": false, 00:08:24.398 "nvme_io": false, 00:08:24.398 "nvme_io_md": false, 00:08:24.398 "write_zeroes": true, 00:08:24.398 "zcopy": true, 00:08:24.398 "get_zone_info": false, 00:08:24.398 "zone_management": false, 00:08:24.398 "zone_append": false, 00:08:24.398 "compare": false, 00:08:24.398 "compare_and_write": false, 00:08:24.398 "abort": true, 00:08:24.398 "seek_hole": false, 00:08:24.398 "seek_data": false, 00:08:24.398 "copy": true, 00:08:24.398 "nvme_iov_md": false 00:08:24.398 }, 00:08:24.398 
"memory_domains": [ 00:08:24.398 { 00:08:24.398 "dma_device_id": "system", 00:08:24.398 "dma_device_type": 1 00:08:24.398 }, 00:08:24.398 { 00:08:24.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.398 "dma_device_type": 2 00:08:24.398 } 00:08:24.398 ], 00:08:24.398 "driver_specific": {} 00:08:24.398 } 00:08:24.398 ] 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.399 16:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.399 "name": "Existed_Raid", 00:08:24.399 "uuid": "be7fd86b-c148-41e7-bb54-7574f649fc10", 00:08:24.399 "strip_size_kb": 64, 00:08:24.399 "state": "online", 00:08:24.399 "raid_level": "concat", 00:08:24.399 "superblock": false, 00:08:24.399 "num_base_bdevs": 3, 00:08:24.399 "num_base_bdevs_discovered": 3, 00:08:24.399 "num_base_bdevs_operational": 3, 00:08:24.399 "base_bdevs_list": [ 00:08:24.399 { 00:08:24.399 "name": "NewBaseBdev", 00:08:24.399 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:24.399 "is_configured": true, 00:08:24.399 "data_offset": 0, 00:08:24.399 "data_size": 65536 00:08:24.399 }, 00:08:24.399 { 00:08:24.399 "name": "BaseBdev2", 00:08:24.399 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:24.399 "is_configured": true, 00:08:24.399 "data_offset": 0, 00:08:24.399 "data_size": 65536 00:08:24.399 }, 00:08:24.399 { 00:08:24.399 "name": "BaseBdev3", 00:08:24.399 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:24.399 "is_configured": true, 00:08:24.399 "data_offset": 0, 00:08:24.399 "data_size": 65536 00:08:24.399 } 00:08:24.399 ] 00:08:24.399 }' 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.399 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.969 [2024-11-28 16:21:16.457947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.969 "name": "Existed_Raid", 00:08:24.969 "aliases": [ 00:08:24.969 "be7fd86b-c148-41e7-bb54-7574f649fc10" 00:08:24.969 ], 00:08:24.969 "product_name": "Raid Volume", 00:08:24.969 "block_size": 512, 00:08:24.969 "num_blocks": 196608, 00:08:24.969 "uuid": "be7fd86b-c148-41e7-bb54-7574f649fc10", 00:08:24.969 "assigned_rate_limits": { 00:08:24.969 "rw_ios_per_sec": 0, 00:08:24.969 "rw_mbytes_per_sec": 0, 00:08:24.969 "r_mbytes_per_sec": 0, 00:08:24.969 "w_mbytes_per_sec": 0 00:08:24.969 }, 00:08:24.969 "claimed": false, 00:08:24.969 "zoned": false, 00:08:24.969 "supported_io_types": { 00:08:24.969 "read": true, 00:08:24.969 "write": true, 00:08:24.969 "unmap": true, 00:08:24.969 "flush": true, 00:08:24.969 "reset": true, 00:08:24.969 "nvme_admin": false, 00:08:24.969 "nvme_io": false, 00:08:24.969 "nvme_io_md": false, 00:08:24.969 "write_zeroes": true, 
00:08:24.969 "zcopy": false, 00:08:24.969 "get_zone_info": false, 00:08:24.969 "zone_management": false, 00:08:24.969 "zone_append": false, 00:08:24.969 "compare": false, 00:08:24.969 "compare_and_write": false, 00:08:24.969 "abort": false, 00:08:24.969 "seek_hole": false, 00:08:24.969 "seek_data": false, 00:08:24.969 "copy": false, 00:08:24.969 "nvme_iov_md": false 00:08:24.969 }, 00:08:24.969 "memory_domains": [ 00:08:24.969 { 00:08:24.969 "dma_device_id": "system", 00:08:24.969 "dma_device_type": 1 00:08:24.969 }, 00:08:24.969 { 00:08:24.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.969 "dma_device_type": 2 00:08:24.969 }, 00:08:24.969 { 00:08:24.969 "dma_device_id": "system", 00:08:24.969 "dma_device_type": 1 00:08:24.969 }, 00:08:24.969 { 00:08:24.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.969 "dma_device_type": 2 00:08:24.969 }, 00:08:24.969 { 00:08:24.969 "dma_device_id": "system", 00:08:24.969 "dma_device_type": 1 00:08:24.969 }, 00:08:24.969 { 00:08:24.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.969 "dma_device_type": 2 00:08:24.969 } 00:08:24.969 ], 00:08:24.969 "driver_specific": { 00:08:24.969 "raid": { 00:08:24.969 "uuid": "be7fd86b-c148-41e7-bb54-7574f649fc10", 00:08:24.969 "strip_size_kb": 64, 00:08:24.969 "state": "online", 00:08:24.969 "raid_level": "concat", 00:08:24.969 "superblock": false, 00:08:24.969 "num_base_bdevs": 3, 00:08:24.969 "num_base_bdevs_discovered": 3, 00:08:24.969 "num_base_bdevs_operational": 3, 00:08:24.969 "base_bdevs_list": [ 00:08:24.969 { 00:08:24.969 "name": "NewBaseBdev", 00:08:24.969 "uuid": "375506cc-6b3b-4f6c-a3aa-5c228f4c3f2d", 00:08:24.969 "is_configured": true, 00:08:24.969 "data_offset": 0, 00:08:24.969 "data_size": 65536 00:08:24.969 }, 00:08:24.969 { 00:08:24.969 "name": "BaseBdev2", 00:08:24.969 "uuid": "bc3c309f-4f4e-47f5-80a4-ef2453704b81", 00:08:24.969 "is_configured": true, 00:08:24.969 "data_offset": 0, 00:08:24.969 "data_size": 65536 00:08:24.969 }, 00:08:24.969 { 
00:08:24.969 "name": "BaseBdev3", 00:08:24.969 "uuid": "1a9d1235-b9fc-4f60-96f5-82722a0b6e66", 00:08:24.969 "is_configured": true, 00:08:24.969 "data_offset": 0, 00:08:24.969 "data_size": 65536 00:08:24.969 } 00:08:24.969 ] 00:08:24.969 } 00:08:24.969 } 00:08:24.969 }' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:24.969 BaseBdev2 00:08:24.969 BaseBdev3' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.969 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.970 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:25.229 [2024-11-28 16:21:16.749137] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.229 [2024-11-28 16:21:16.749202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.229 [2024-11-28 16:21:16.749272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.229 [2024-11-28 16:21:16.749324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.229 [2024-11-28 16:21:16.749335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76769 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76769 ']' 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76769 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76769 00:08:25.229 killing process with pid 76769 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76769' 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76769 00:08:25.229 [2024-11-28 16:21:16.799422] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.229 16:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76769 00:08:25.229 [2024-11-28 16:21:16.830901] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.489 ************************************ 00:08:25.489 END TEST raid_state_function_test 00:08:25.489 ************************************ 00:08:25.489 00:08:25.489 real 0m8.689s 00:08:25.489 user 0m14.831s 00:08:25.489 sys 0m1.722s 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.489 16:21:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:25.489 16:21:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:25.489 16:21:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.489 16:21:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.489 ************************************ 00:08:25.489 START TEST raid_state_function_test_sb 00:08:25.489 ************************************ 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:25.489 16:21:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:25.490 Process raid pid: 77368 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77368 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77368' 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77368 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77368 ']' 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.490 16:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.490 [2024-11-28 16:21:17.229730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:25.490 [2024-11-28 16:21:17.229869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.762 [2024-11-28 16:21:17.387289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.762 [2024-11-28 16:21:17.431340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.762 [2024-11-28 16:21:17.473587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.762 [2024-11-28 16:21:17.473621] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.347 [2024-11-28 16:21:18.066897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.347 [2024-11-28 16:21:18.066945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.347 [2024-11-28 
16:21:18.066959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.347 [2024-11-28 16:21:18.066970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.347 [2024-11-28 16:21:18.066976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.347 [2024-11-28 16:21:18.066987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.347 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.607 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.607 "name": "Existed_Raid", 00:08:26.607 "uuid": "817c6328-2521-4175-aede-d548e208fb81", 00:08:26.607 "strip_size_kb": 64, 00:08:26.607 "state": "configuring", 00:08:26.607 "raid_level": "concat", 00:08:26.607 "superblock": true, 00:08:26.607 "num_base_bdevs": 3, 00:08:26.607 "num_base_bdevs_discovered": 0, 00:08:26.607 "num_base_bdevs_operational": 3, 00:08:26.607 "base_bdevs_list": [ 00:08:26.607 { 00:08:26.607 "name": "BaseBdev1", 00:08:26.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.607 "is_configured": false, 00:08:26.607 "data_offset": 0, 00:08:26.607 "data_size": 0 00:08:26.607 }, 00:08:26.607 { 00:08:26.607 "name": "BaseBdev2", 00:08:26.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.607 "is_configured": false, 00:08:26.607 "data_offset": 0, 00:08:26.607 "data_size": 0 00:08:26.607 }, 00:08:26.607 { 00:08:26.607 "name": "BaseBdev3", 00:08:26.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.607 "is_configured": false, 00:08:26.607 "data_offset": 0, 00:08:26.607 "data_size": 0 00:08:26.607 } 00:08:26.607 ] 00:08:26.607 }' 00:08:26.607 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.607 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 [2024-11-28 16:21:18.490061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.867 [2024-11-28 16:21:18.490150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 [2024-11-28 16:21:18.502073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.867 [2024-11-28 16:21:18.502148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.867 [2024-11-28 16:21:18.502175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.867 [2024-11-28 16:21:18.502197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.867 [2024-11-28 16:21:18.502214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.867 [2024-11-28 16:21:18.502234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:26.867 
16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 [2024-11-28 16:21:18.522787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.867 BaseBdev1 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 [ 00:08:26.867 { 
00:08:26.867 "name": "BaseBdev1", 00:08:26.867 "aliases": [ 00:08:26.867 "282fe50c-5ff6-47e8-81cc-4eba20d34ff4" 00:08:26.867 ], 00:08:26.867 "product_name": "Malloc disk", 00:08:26.867 "block_size": 512, 00:08:26.867 "num_blocks": 65536, 00:08:26.867 "uuid": "282fe50c-5ff6-47e8-81cc-4eba20d34ff4", 00:08:26.867 "assigned_rate_limits": { 00:08:26.867 "rw_ios_per_sec": 0, 00:08:26.867 "rw_mbytes_per_sec": 0, 00:08:26.867 "r_mbytes_per_sec": 0, 00:08:26.867 "w_mbytes_per_sec": 0 00:08:26.867 }, 00:08:26.867 "claimed": true, 00:08:26.867 "claim_type": "exclusive_write", 00:08:26.867 "zoned": false, 00:08:26.867 "supported_io_types": { 00:08:26.867 "read": true, 00:08:26.867 "write": true, 00:08:26.867 "unmap": true, 00:08:26.867 "flush": true, 00:08:26.867 "reset": true, 00:08:26.867 "nvme_admin": false, 00:08:26.867 "nvme_io": false, 00:08:26.867 "nvme_io_md": false, 00:08:26.867 "write_zeroes": true, 00:08:26.867 "zcopy": true, 00:08:26.867 "get_zone_info": false, 00:08:26.867 "zone_management": false, 00:08:26.867 "zone_append": false, 00:08:26.867 "compare": false, 00:08:26.867 "compare_and_write": false, 00:08:26.867 "abort": true, 00:08:26.867 "seek_hole": false, 00:08:26.867 "seek_data": false, 00:08:26.867 "copy": true, 00:08:26.867 "nvme_iov_md": false 00:08:26.867 }, 00:08:26.867 "memory_domains": [ 00:08:26.867 { 00:08:26.867 "dma_device_id": "system", 00:08:26.867 "dma_device_type": 1 00:08:26.867 }, 00:08:26.867 { 00:08:26.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.867 "dma_device_type": 2 00:08:26.867 } 00:08:26.867 ], 00:08:26.867 "driver_specific": {} 00:08:26.867 } 00:08:26.867 ] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.867 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.868 "name": "Existed_Raid", 00:08:26.868 "uuid": "19c95d16-b9f6-4e9b-904d-fbe05afaeaba", 00:08:26.868 "strip_size_kb": 64, 00:08:26.868 "state": "configuring", 00:08:26.868 "raid_level": "concat", 00:08:26.868 "superblock": true, 00:08:26.868 
"num_base_bdevs": 3, 00:08:26.868 "num_base_bdevs_discovered": 1, 00:08:26.868 "num_base_bdevs_operational": 3, 00:08:26.868 "base_bdevs_list": [ 00:08:26.868 { 00:08:26.868 "name": "BaseBdev1", 00:08:26.868 "uuid": "282fe50c-5ff6-47e8-81cc-4eba20d34ff4", 00:08:26.868 "is_configured": true, 00:08:26.868 "data_offset": 2048, 00:08:26.868 "data_size": 63488 00:08:26.868 }, 00:08:26.868 { 00:08:26.868 "name": "BaseBdev2", 00:08:26.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.868 "is_configured": false, 00:08:26.868 "data_offset": 0, 00:08:26.868 "data_size": 0 00:08:26.868 }, 00:08:26.868 { 00:08:26.868 "name": "BaseBdev3", 00:08:26.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.868 "is_configured": false, 00:08:26.868 "data_offset": 0, 00:08:26.868 "data_size": 0 00:08:26.868 } 00:08:26.868 ] 00:08:26.868 }' 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.868 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.436 16:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.436 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.436 16:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.436 [2024-11-28 16:21:19.001990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.436 [2024-11-28 16:21:19.002046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.436 
16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.436 [2024-11-28 16:21:19.010017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.436 [2024-11-28 16:21:19.011825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.436 [2024-11-28 16:21:19.011883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.436 [2024-11-28 16:21:19.011894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.436 [2024-11-28 16:21:19.011903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.436 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.437 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.437 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.437 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.437 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.437 "name": "Existed_Raid", 00:08:27.437 "uuid": "a1a8aeaa-cb54-4b20-a6ec-dc2149024518", 00:08:27.437 "strip_size_kb": 64, 00:08:27.437 "state": "configuring", 00:08:27.437 "raid_level": "concat", 00:08:27.437 "superblock": true, 00:08:27.437 "num_base_bdevs": 3, 00:08:27.437 "num_base_bdevs_discovered": 1, 00:08:27.437 "num_base_bdevs_operational": 3, 00:08:27.437 "base_bdevs_list": [ 00:08:27.437 { 00:08:27.437 "name": "BaseBdev1", 00:08:27.437 "uuid": "282fe50c-5ff6-47e8-81cc-4eba20d34ff4", 00:08:27.437 "is_configured": true, 00:08:27.437 "data_offset": 2048, 00:08:27.437 "data_size": 63488 00:08:27.437 }, 00:08:27.437 { 00:08:27.437 "name": "BaseBdev2", 00:08:27.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.437 "is_configured": false, 00:08:27.437 "data_offset": 0, 00:08:27.437 "data_size": 0 00:08:27.437 }, 00:08:27.437 { 00:08:27.437 "name": "BaseBdev3", 00:08:27.437 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:27.437 "is_configured": false, 00:08:27.437 "data_offset": 0, 00:08:27.437 "data_size": 0 00:08:27.437 } 00:08:27.437 ] 00:08:27.437 }' 00:08:27.437 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.437 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.005 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.005 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.005 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.006 [2024-11-28 16:21:19.493185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.006 BaseBdev2 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.006 [ 00:08:28.006 { 00:08:28.006 "name": "BaseBdev2", 00:08:28.006 "aliases": [ 00:08:28.006 "ed82c330-9326-4900-b7a3-aed044ccf5df" 00:08:28.006 ], 00:08:28.006 "product_name": "Malloc disk", 00:08:28.006 "block_size": 512, 00:08:28.006 "num_blocks": 65536, 00:08:28.006 "uuid": "ed82c330-9326-4900-b7a3-aed044ccf5df", 00:08:28.006 "assigned_rate_limits": { 00:08:28.006 "rw_ios_per_sec": 0, 00:08:28.006 "rw_mbytes_per_sec": 0, 00:08:28.006 "r_mbytes_per_sec": 0, 00:08:28.006 "w_mbytes_per_sec": 0 00:08:28.006 }, 00:08:28.006 "claimed": true, 00:08:28.006 "claim_type": "exclusive_write", 00:08:28.006 "zoned": false, 00:08:28.006 "supported_io_types": { 00:08:28.006 "read": true, 00:08:28.006 "write": true, 00:08:28.006 "unmap": true, 00:08:28.006 "flush": true, 00:08:28.006 "reset": true, 00:08:28.006 "nvme_admin": false, 00:08:28.006 "nvme_io": false, 00:08:28.006 "nvme_io_md": false, 00:08:28.006 "write_zeroes": true, 00:08:28.006 "zcopy": true, 00:08:28.006 "get_zone_info": false, 00:08:28.006 "zone_management": false, 00:08:28.006 "zone_append": false, 00:08:28.006 "compare": false, 00:08:28.006 "compare_and_write": false, 00:08:28.006 "abort": true, 00:08:28.006 "seek_hole": false, 00:08:28.006 "seek_data": false, 00:08:28.006 "copy": true, 00:08:28.006 "nvme_iov_md": false 00:08:28.006 }, 00:08:28.006 "memory_domains": [ 00:08:28.006 { 00:08:28.006 "dma_device_id": "system", 00:08:28.006 "dma_device_type": 1 00:08:28.006 }, 00:08:28.006 { 00:08:28.006 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.006 "dma_device_type": 2 00:08:28.006 } 00:08:28.006 ], 00:08:28.006 "driver_specific": {} 00:08:28.006 } 00:08:28.006 ] 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.006 "name": "Existed_Raid", 00:08:28.006 "uuid": "a1a8aeaa-cb54-4b20-a6ec-dc2149024518", 00:08:28.006 "strip_size_kb": 64, 00:08:28.006 "state": "configuring", 00:08:28.006 "raid_level": "concat", 00:08:28.006 "superblock": true, 00:08:28.006 "num_base_bdevs": 3, 00:08:28.006 "num_base_bdevs_discovered": 2, 00:08:28.006 "num_base_bdevs_operational": 3, 00:08:28.006 "base_bdevs_list": [ 00:08:28.006 { 00:08:28.006 "name": "BaseBdev1", 00:08:28.006 "uuid": "282fe50c-5ff6-47e8-81cc-4eba20d34ff4", 00:08:28.006 "is_configured": true, 00:08:28.006 "data_offset": 2048, 00:08:28.006 "data_size": 63488 00:08:28.006 }, 00:08:28.006 { 00:08:28.006 "name": "BaseBdev2", 00:08:28.006 "uuid": "ed82c330-9326-4900-b7a3-aed044ccf5df", 00:08:28.006 "is_configured": true, 00:08:28.006 "data_offset": 2048, 00:08:28.006 "data_size": 63488 00:08:28.006 }, 00:08:28.006 { 00:08:28.006 "name": "BaseBdev3", 00:08:28.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.006 "is_configured": false, 00:08:28.006 "data_offset": 0, 00:08:28.006 "data_size": 0 00:08:28.006 } 00:08:28.006 ] 00:08:28.006 }' 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.006 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:28.266 16:21:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 [2024-11-28 16:21:19.919381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.266 [2024-11-28 16:21:19.919659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:28.266 [2024-11-28 16:21:19.919732] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.266 BaseBdev3 00:08:28.266 [2024-11-28 16:21:19.920054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:28.266 [2024-11-28 16:21:19.920166] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:28.266 [2024-11-28 16:21:19.920180] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:28.266 [2024-11-28 16:21:19.920298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 [ 00:08:28.266 { 00:08:28.266 "name": "BaseBdev3", 00:08:28.266 "aliases": [ 00:08:28.266 "5f99ce55-d986-4f2c-93f4-bff264ca92b1" 00:08:28.266 ], 00:08:28.266 "product_name": "Malloc disk", 00:08:28.266 "block_size": 512, 00:08:28.266 "num_blocks": 65536, 00:08:28.266 "uuid": "5f99ce55-d986-4f2c-93f4-bff264ca92b1", 00:08:28.266 "assigned_rate_limits": { 00:08:28.266 "rw_ios_per_sec": 0, 00:08:28.266 "rw_mbytes_per_sec": 0, 00:08:28.266 "r_mbytes_per_sec": 0, 00:08:28.266 "w_mbytes_per_sec": 0 00:08:28.266 }, 00:08:28.266 "claimed": true, 00:08:28.266 "claim_type": "exclusive_write", 00:08:28.266 "zoned": false, 00:08:28.266 "supported_io_types": { 00:08:28.266 "read": true, 00:08:28.266 "write": true, 00:08:28.266 "unmap": true, 00:08:28.266 "flush": true, 00:08:28.266 "reset": true, 00:08:28.266 "nvme_admin": false, 00:08:28.266 "nvme_io": false, 00:08:28.266 "nvme_io_md": false, 00:08:28.266 "write_zeroes": true, 00:08:28.266 "zcopy": true, 00:08:28.266 "get_zone_info": false, 00:08:28.266 "zone_management": false, 00:08:28.266 "zone_append": false, 00:08:28.266 "compare": false, 00:08:28.266 "compare_and_write": false, 00:08:28.266 "abort": true, 00:08:28.266 "seek_hole": false, 00:08:28.266 "seek_data": false, 
00:08:28.266 "copy": true, 00:08:28.266 "nvme_iov_md": false 00:08:28.266 }, 00:08:28.266 "memory_domains": [ 00:08:28.266 { 00:08:28.266 "dma_device_id": "system", 00:08:28.266 "dma_device_type": 1 00:08:28.266 }, 00:08:28.266 { 00:08:28.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.266 "dma_device_type": 2 00:08:28.266 } 00:08:28.266 ], 00:08:28.266 "driver_specific": {} 00:08:28.266 } 00:08:28.266 ] 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 16:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.266 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.266 "name": "Existed_Raid", 00:08:28.266 "uuid": "a1a8aeaa-cb54-4b20-a6ec-dc2149024518", 00:08:28.266 "strip_size_kb": 64, 00:08:28.266 "state": "online", 00:08:28.266 "raid_level": "concat", 00:08:28.266 "superblock": true, 00:08:28.266 "num_base_bdevs": 3, 00:08:28.266 "num_base_bdevs_discovered": 3, 00:08:28.266 "num_base_bdevs_operational": 3, 00:08:28.266 "base_bdevs_list": [ 00:08:28.266 { 00:08:28.266 "name": "BaseBdev1", 00:08:28.266 "uuid": "282fe50c-5ff6-47e8-81cc-4eba20d34ff4", 00:08:28.266 "is_configured": true, 00:08:28.266 "data_offset": 2048, 00:08:28.266 "data_size": 63488 00:08:28.266 }, 00:08:28.267 { 00:08:28.267 "name": "BaseBdev2", 00:08:28.267 "uuid": "ed82c330-9326-4900-b7a3-aed044ccf5df", 00:08:28.267 "is_configured": true, 00:08:28.267 "data_offset": 2048, 00:08:28.267 "data_size": 63488 00:08:28.267 }, 00:08:28.267 { 00:08:28.267 "name": "BaseBdev3", 00:08:28.267 "uuid": "5f99ce55-d986-4f2c-93f4-bff264ca92b1", 00:08:28.267 "is_configured": true, 00:08:28.267 "data_offset": 2048, 00:08:28.267 "data_size": 63488 00:08:28.267 } 00:08:28.267 ] 00:08:28.267 }' 00:08:28.267 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.267 16:21:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.837 [2024-11-28 16:21:20.346992] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.837 "name": "Existed_Raid", 00:08:28.837 "aliases": [ 00:08:28.837 "a1a8aeaa-cb54-4b20-a6ec-dc2149024518" 00:08:28.837 ], 00:08:28.837 "product_name": "Raid Volume", 00:08:28.837 "block_size": 512, 00:08:28.837 "num_blocks": 190464, 00:08:28.837 "uuid": "a1a8aeaa-cb54-4b20-a6ec-dc2149024518", 00:08:28.837 "assigned_rate_limits": { 00:08:28.837 "rw_ios_per_sec": 0, 00:08:28.837 "rw_mbytes_per_sec": 0, 00:08:28.837 
"r_mbytes_per_sec": 0, 00:08:28.837 "w_mbytes_per_sec": 0 00:08:28.837 }, 00:08:28.837 "claimed": false, 00:08:28.837 "zoned": false, 00:08:28.837 "supported_io_types": { 00:08:28.837 "read": true, 00:08:28.837 "write": true, 00:08:28.837 "unmap": true, 00:08:28.837 "flush": true, 00:08:28.837 "reset": true, 00:08:28.837 "nvme_admin": false, 00:08:28.837 "nvme_io": false, 00:08:28.837 "nvme_io_md": false, 00:08:28.837 "write_zeroes": true, 00:08:28.837 "zcopy": false, 00:08:28.837 "get_zone_info": false, 00:08:28.837 "zone_management": false, 00:08:28.837 "zone_append": false, 00:08:28.837 "compare": false, 00:08:28.837 "compare_and_write": false, 00:08:28.837 "abort": false, 00:08:28.837 "seek_hole": false, 00:08:28.837 "seek_data": false, 00:08:28.837 "copy": false, 00:08:28.837 "nvme_iov_md": false 00:08:28.837 }, 00:08:28.837 "memory_domains": [ 00:08:28.837 { 00:08:28.837 "dma_device_id": "system", 00:08:28.837 "dma_device_type": 1 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.837 "dma_device_type": 2 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "dma_device_id": "system", 00:08:28.837 "dma_device_type": 1 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.837 "dma_device_type": 2 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "dma_device_id": "system", 00:08:28.837 "dma_device_type": 1 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.837 "dma_device_type": 2 00:08:28.837 } 00:08:28.837 ], 00:08:28.837 "driver_specific": { 00:08:28.837 "raid": { 00:08:28.837 "uuid": "a1a8aeaa-cb54-4b20-a6ec-dc2149024518", 00:08:28.837 "strip_size_kb": 64, 00:08:28.837 "state": "online", 00:08:28.837 "raid_level": "concat", 00:08:28.837 "superblock": true, 00:08:28.837 "num_base_bdevs": 3, 00:08:28.837 "num_base_bdevs_discovered": 3, 00:08:28.837 "num_base_bdevs_operational": 3, 00:08:28.837 "base_bdevs_list": [ 00:08:28.837 { 00:08:28.837 
"name": "BaseBdev1", 00:08:28.837 "uuid": "282fe50c-5ff6-47e8-81cc-4eba20d34ff4", 00:08:28.837 "is_configured": true, 00:08:28.837 "data_offset": 2048, 00:08:28.837 "data_size": 63488 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "name": "BaseBdev2", 00:08:28.837 "uuid": "ed82c330-9326-4900-b7a3-aed044ccf5df", 00:08:28.837 "is_configured": true, 00:08:28.837 "data_offset": 2048, 00:08:28.837 "data_size": 63488 00:08:28.837 }, 00:08:28.837 { 00:08:28.837 "name": "BaseBdev3", 00:08:28.837 "uuid": "5f99ce55-d986-4f2c-93f4-bff264ca92b1", 00:08:28.837 "is_configured": true, 00:08:28.837 "data_offset": 2048, 00:08:28.837 "data_size": 63488 00:08:28.837 } 00:08:28.837 ] 00:08:28.837 } 00:08:28.837 } 00:08:28.837 }' 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:28.837 BaseBdev2 00:08:28.837 BaseBdev3' 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.837 16:21:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.837 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.838 16:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.098 [2024-11-28 16:21:20.622279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.098 [2024-11-28 16:21:20.622311] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.098 [2024-11-28 16:21:20.622372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.098 "name": "Existed_Raid", 00:08:29.098 "uuid": "a1a8aeaa-cb54-4b20-a6ec-dc2149024518", 00:08:29.098 "strip_size_kb": 64, 00:08:29.098 "state": "offline", 00:08:29.098 "raid_level": "concat", 00:08:29.098 "superblock": true, 00:08:29.098 "num_base_bdevs": 3, 00:08:29.098 "num_base_bdevs_discovered": 2, 00:08:29.098 "num_base_bdevs_operational": 2, 00:08:29.098 "base_bdevs_list": [ 00:08:29.098 { 00:08:29.098 "name": null, 00:08:29.098 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:29.098 "is_configured": false, 00:08:29.098 "data_offset": 0, 00:08:29.098 "data_size": 63488 00:08:29.098 }, 00:08:29.098 { 00:08:29.098 "name": "BaseBdev2", 00:08:29.098 "uuid": "ed82c330-9326-4900-b7a3-aed044ccf5df", 00:08:29.098 "is_configured": true, 00:08:29.098 "data_offset": 2048, 00:08:29.098 "data_size": 63488 00:08:29.098 }, 00:08:29.098 { 00:08:29.098 "name": "BaseBdev3", 00:08:29.098 "uuid": "5f99ce55-d986-4f2c-93f4-bff264ca92b1", 00:08:29.098 "is_configured": true, 00:08:29.098 "data_offset": 2048, 00:08:29.098 "data_size": 63488 00:08:29.098 } 00:08:29.098 ] 00:08:29.098 }' 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.098 16:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 [2024-11-28 16:21:21.084750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.358 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 [2024-11-28 16:21:21.135849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:29.617 [2024-11-28 16:21:21.135895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 BaseBdev2 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 
16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 [ 00:08:29.617 { 00:08:29.617 "name": "BaseBdev2", 00:08:29.617 "aliases": [ 00:08:29.617 "cfdad1bc-199a-4440-a7c3-5f1f9afed68d" 00:08:29.617 ], 00:08:29.617 "product_name": "Malloc disk", 00:08:29.617 "block_size": 512, 00:08:29.617 "num_blocks": 65536, 00:08:29.617 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:29.617 "assigned_rate_limits": { 00:08:29.617 "rw_ios_per_sec": 0, 00:08:29.617 "rw_mbytes_per_sec": 0, 00:08:29.617 "r_mbytes_per_sec": 0, 00:08:29.617 "w_mbytes_per_sec": 0 
00:08:29.617 }, 00:08:29.617 "claimed": false, 00:08:29.617 "zoned": false, 00:08:29.617 "supported_io_types": { 00:08:29.617 "read": true, 00:08:29.617 "write": true, 00:08:29.617 "unmap": true, 00:08:29.617 "flush": true, 00:08:29.617 "reset": true, 00:08:29.617 "nvme_admin": false, 00:08:29.617 "nvme_io": false, 00:08:29.617 "nvme_io_md": false, 00:08:29.617 "write_zeroes": true, 00:08:29.617 "zcopy": true, 00:08:29.617 "get_zone_info": false, 00:08:29.617 "zone_management": false, 00:08:29.617 "zone_append": false, 00:08:29.617 "compare": false, 00:08:29.617 "compare_and_write": false, 00:08:29.617 "abort": true, 00:08:29.617 "seek_hole": false, 00:08:29.617 "seek_data": false, 00:08:29.617 "copy": true, 00:08:29.617 "nvme_iov_md": false 00:08:29.617 }, 00:08:29.617 "memory_domains": [ 00:08:29.617 { 00:08:29.617 "dma_device_id": "system", 00:08:29.617 "dma_device_type": 1 00:08:29.617 }, 00:08:29.617 { 00:08:29.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.617 "dma_device_type": 2 00:08:29.617 } 00:08:29.617 ], 00:08:29.617 "driver_specific": {} 00:08:29.617 } 00:08:29.617 ] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 BaseBdev3 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 [ 00:08:29.617 { 00:08:29.617 "name": "BaseBdev3", 00:08:29.617 "aliases": [ 00:08:29.617 "021ce311-35d1-4851-97fd-9113c344172e" 00:08:29.617 ], 00:08:29.617 "product_name": "Malloc disk", 00:08:29.617 "block_size": 512, 00:08:29.617 "num_blocks": 65536, 00:08:29.617 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:29.617 "assigned_rate_limits": { 00:08:29.617 "rw_ios_per_sec": 0, 00:08:29.617 "rw_mbytes_per_sec": 0, 
00:08:29.617 "r_mbytes_per_sec": 0, 00:08:29.617 "w_mbytes_per_sec": 0 00:08:29.617 }, 00:08:29.617 "claimed": false, 00:08:29.617 "zoned": false, 00:08:29.617 "supported_io_types": { 00:08:29.617 "read": true, 00:08:29.617 "write": true, 00:08:29.617 "unmap": true, 00:08:29.617 "flush": true, 00:08:29.617 "reset": true, 00:08:29.617 "nvme_admin": false, 00:08:29.617 "nvme_io": false, 00:08:29.617 "nvme_io_md": false, 00:08:29.617 "write_zeroes": true, 00:08:29.617 "zcopy": true, 00:08:29.617 "get_zone_info": false, 00:08:29.617 "zone_management": false, 00:08:29.617 "zone_append": false, 00:08:29.617 "compare": false, 00:08:29.617 "compare_and_write": false, 00:08:29.617 "abort": true, 00:08:29.617 "seek_hole": false, 00:08:29.617 "seek_data": false, 00:08:29.617 "copy": true, 00:08:29.617 "nvme_iov_md": false 00:08:29.617 }, 00:08:29.617 "memory_domains": [ 00:08:29.617 { 00:08:29.617 "dma_device_id": "system", 00:08:29.617 "dma_device_type": 1 00:08:29.617 }, 00:08:29.617 { 00:08:29.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.617 "dma_device_type": 2 00:08:29.617 } 00:08:29.617 ], 00:08:29.617 "driver_specific": {} 00:08:29.617 } 00:08:29.617 ] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.617 16:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.617 [2024-11-28 16:21:21.310175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.618 [2024-11-28 16:21:21.310260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.618 [2024-11-28 16:21:21.310302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.618 [2024-11-28 16:21:21.312157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.618 16:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.618 "name": "Existed_Raid", 00:08:29.618 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:29.618 "strip_size_kb": 64, 00:08:29.618 "state": "configuring", 00:08:29.618 "raid_level": "concat", 00:08:29.618 "superblock": true, 00:08:29.618 "num_base_bdevs": 3, 00:08:29.618 "num_base_bdevs_discovered": 2, 00:08:29.618 "num_base_bdevs_operational": 3, 00:08:29.618 "base_bdevs_list": [ 00:08:29.618 { 00:08:29.618 "name": "BaseBdev1", 00:08:29.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.618 "is_configured": false, 00:08:29.618 "data_offset": 0, 00:08:29.618 "data_size": 0 00:08:29.618 }, 00:08:29.618 { 00:08:29.618 "name": "BaseBdev2", 00:08:29.618 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:29.618 "is_configured": true, 00:08:29.618 "data_offset": 2048, 00:08:29.618 "data_size": 63488 00:08:29.618 }, 00:08:29.618 { 00:08:29.618 "name": "BaseBdev3", 00:08:29.618 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:29.618 "is_configured": true, 00:08:29.618 "data_offset": 2048, 00:08:29.618 "data_size": 63488 00:08:29.618 } 00:08:29.618 ] 00:08:29.618 }' 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.618 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.186 [2024-11-28 16:21:21.781369] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.186 "name": "Existed_Raid", 00:08:30.186 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:30.186 "strip_size_kb": 64, 00:08:30.186 "state": "configuring", 00:08:30.186 "raid_level": "concat", 00:08:30.186 "superblock": true, 00:08:30.186 "num_base_bdevs": 3, 00:08:30.186 "num_base_bdevs_discovered": 1, 00:08:30.186 "num_base_bdevs_operational": 3, 00:08:30.186 "base_bdevs_list": [ 00:08:30.186 { 00:08:30.186 "name": "BaseBdev1", 00:08:30.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.186 "is_configured": false, 00:08:30.186 "data_offset": 0, 00:08:30.186 "data_size": 0 00:08:30.186 }, 00:08:30.186 { 00:08:30.186 "name": null, 00:08:30.186 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:30.186 "is_configured": false, 00:08:30.186 "data_offset": 0, 00:08:30.186 "data_size": 63488 00:08:30.186 }, 00:08:30.186 { 00:08:30.186 "name": "BaseBdev3", 00:08:30.186 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:30.186 "is_configured": true, 00:08:30.186 "data_offset": 2048, 00:08:30.186 "data_size": 63488 00:08:30.186 } 00:08:30.186 ] 00:08:30.186 }' 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.186 16:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.446 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.446 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.446 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.446 16:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:30.446 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 [2024-11-28 16:21:22.259374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.706 BaseBdev1 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 
16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 [ 00:08:30.706 { 00:08:30.706 "name": "BaseBdev1", 00:08:30.706 "aliases": [ 00:08:30.706 "b207e378-7ecf-44d1-a4db-86440c0bf1dd" 00:08:30.706 ], 00:08:30.706 "product_name": "Malloc disk", 00:08:30.706 "block_size": 512, 00:08:30.706 "num_blocks": 65536, 00:08:30.706 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:30.706 "assigned_rate_limits": { 00:08:30.706 "rw_ios_per_sec": 0, 00:08:30.706 "rw_mbytes_per_sec": 0, 00:08:30.706 "r_mbytes_per_sec": 0, 00:08:30.706 "w_mbytes_per_sec": 0 00:08:30.706 }, 00:08:30.706 "claimed": true, 00:08:30.706 "claim_type": "exclusive_write", 00:08:30.706 "zoned": false, 00:08:30.706 "supported_io_types": { 00:08:30.706 "read": true, 00:08:30.706 "write": true, 00:08:30.706 "unmap": true, 00:08:30.706 "flush": true, 00:08:30.706 "reset": true, 00:08:30.706 "nvme_admin": false, 00:08:30.706 "nvme_io": false, 00:08:30.706 "nvme_io_md": false, 00:08:30.706 "write_zeroes": true, 00:08:30.706 "zcopy": true, 00:08:30.706 "get_zone_info": false, 00:08:30.706 "zone_management": false, 00:08:30.706 "zone_append": false, 00:08:30.706 "compare": false, 00:08:30.706 "compare_and_write": false, 00:08:30.706 "abort": true, 00:08:30.706 "seek_hole": false, 00:08:30.706 "seek_data": false, 00:08:30.706 "copy": true, 00:08:30.706 "nvme_iov_md": false 00:08:30.706 }, 00:08:30.706 "memory_domains": [ 00:08:30.706 { 00:08:30.706 "dma_device_id": "system", 00:08:30.706 "dma_device_type": 1 00:08:30.706 }, 00:08:30.706 { 00:08:30.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:30.706 "dma_device_type": 2 00:08:30.706 } 00:08:30.706 ], 00:08:30.706 "driver_specific": {} 00:08:30.706 } 00:08:30.706 ] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.706 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.706 "name": "Existed_Raid", 00:08:30.706 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:30.706 "strip_size_kb": 64, 00:08:30.706 "state": "configuring", 00:08:30.706 "raid_level": "concat", 00:08:30.706 "superblock": true, 00:08:30.706 "num_base_bdevs": 3, 00:08:30.706 "num_base_bdevs_discovered": 2, 00:08:30.706 "num_base_bdevs_operational": 3, 00:08:30.706 "base_bdevs_list": [ 00:08:30.706 { 00:08:30.706 "name": "BaseBdev1", 00:08:30.706 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:30.706 "is_configured": true, 00:08:30.706 "data_offset": 2048, 00:08:30.706 "data_size": 63488 00:08:30.706 }, 00:08:30.706 { 00:08:30.706 "name": null, 00:08:30.706 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:30.706 "is_configured": false, 00:08:30.706 "data_offset": 0, 00:08:30.706 "data_size": 63488 00:08:30.706 }, 00:08:30.706 { 00:08:30.706 "name": "BaseBdev3", 00:08:30.707 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:30.707 "is_configured": true, 00:08:30.707 "data_offset": 2048, 00:08:30.707 "data_size": 63488 00:08:30.707 } 00:08:30.707 ] 00:08:30.707 }' 00:08:30.707 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.707 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.966 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.966 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.966 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.966 16:21:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:30.966 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.226 [2024-11-28 16:21:22.746586] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.226 "name": "Existed_Raid", 00:08:31.226 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:31.226 "strip_size_kb": 64, 00:08:31.226 "state": "configuring", 00:08:31.226 "raid_level": "concat", 00:08:31.226 "superblock": true, 00:08:31.226 "num_base_bdevs": 3, 00:08:31.226 "num_base_bdevs_discovered": 1, 00:08:31.226 "num_base_bdevs_operational": 3, 00:08:31.226 "base_bdevs_list": [ 00:08:31.226 { 00:08:31.226 "name": "BaseBdev1", 00:08:31.226 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:31.226 "is_configured": true, 00:08:31.226 "data_offset": 2048, 00:08:31.226 "data_size": 63488 00:08:31.226 }, 00:08:31.226 { 00:08:31.226 "name": null, 00:08:31.226 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:31.226 "is_configured": false, 00:08:31.226 "data_offset": 0, 00:08:31.226 "data_size": 63488 00:08:31.226 }, 00:08:31.226 { 00:08:31.226 "name": null, 00:08:31.226 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:31.226 "is_configured": false, 00:08:31.226 "data_offset": 0, 00:08:31.226 "data_size": 63488 00:08:31.226 } 00:08:31.226 ] 00:08:31.226 }' 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.226 16:21:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.486 [2024-11-28 16:21:23.221787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.486 16:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.486 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.746 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.746 "name": "Existed_Raid", 00:08:31.746 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:31.746 "strip_size_kb": 64, 00:08:31.746 "state": "configuring", 00:08:31.746 "raid_level": "concat", 00:08:31.746 "superblock": true, 00:08:31.746 "num_base_bdevs": 3, 00:08:31.746 "num_base_bdevs_discovered": 2, 00:08:31.746 "num_base_bdevs_operational": 3, 00:08:31.746 "base_bdevs_list": [ 00:08:31.746 { 00:08:31.746 "name": "BaseBdev1", 00:08:31.746 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:31.746 "is_configured": true, 00:08:31.746 "data_offset": 2048, 00:08:31.746 "data_size": 63488 00:08:31.746 }, 00:08:31.746 { 00:08:31.746 "name": null, 00:08:31.746 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:31.746 "is_configured": 
false, 00:08:31.746 "data_offset": 0, 00:08:31.746 "data_size": 63488 00:08:31.746 }, 00:08:31.746 { 00:08:31.746 "name": "BaseBdev3", 00:08:31.746 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:31.746 "is_configured": true, 00:08:31.746 "data_offset": 2048, 00:08:31.746 "data_size": 63488 00:08:31.746 } 00:08:31.746 ] 00:08:31.746 }' 00:08:31.746 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.746 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.006 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.006 [2024-11-28 16:21:23.708958] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.007 16:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.007 "name": "Existed_Raid", 00:08:32.007 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:32.007 "strip_size_kb": 64, 00:08:32.007 "state": "configuring", 00:08:32.007 "raid_level": "concat", 00:08:32.007 "superblock": true, 00:08:32.007 "num_base_bdevs": 3, 00:08:32.007 
"num_base_bdevs_discovered": 1, 00:08:32.007 "num_base_bdevs_operational": 3, 00:08:32.007 "base_bdevs_list": [ 00:08:32.007 { 00:08:32.007 "name": null, 00:08:32.007 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:32.007 "is_configured": false, 00:08:32.007 "data_offset": 0, 00:08:32.007 "data_size": 63488 00:08:32.007 }, 00:08:32.007 { 00:08:32.007 "name": null, 00:08:32.007 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:32.007 "is_configured": false, 00:08:32.007 "data_offset": 0, 00:08:32.007 "data_size": 63488 00:08:32.007 }, 00:08:32.007 { 00:08:32.007 "name": "BaseBdev3", 00:08:32.007 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:32.007 "is_configured": true, 00:08:32.007 "data_offset": 2048, 00:08:32.007 "data_size": 63488 00:08:32.007 } 00:08:32.007 ] 00:08:32.007 }' 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.007 16:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.575 16:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.575 [2024-11-28 16:21:24.150579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.575 
16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.575 "name": "Existed_Raid", 00:08:32.575 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:32.575 "strip_size_kb": 64, 00:08:32.575 "state": "configuring", 00:08:32.575 "raid_level": "concat", 00:08:32.575 "superblock": true, 00:08:32.575 "num_base_bdevs": 3, 00:08:32.575 "num_base_bdevs_discovered": 2, 00:08:32.575 "num_base_bdevs_operational": 3, 00:08:32.575 "base_bdevs_list": [ 00:08:32.575 { 00:08:32.575 "name": null, 00:08:32.575 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:32.575 "is_configured": false, 00:08:32.575 "data_offset": 0, 00:08:32.575 "data_size": 63488 00:08:32.575 }, 00:08:32.575 { 00:08:32.575 "name": "BaseBdev2", 00:08:32.575 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d", 00:08:32.575 "is_configured": true, 00:08:32.575 "data_offset": 2048, 00:08:32.575 "data_size": 63488 00:08:32.575 }, 00:08:32.575 { 00:08:32.575 "name": "BaseBdev3", 00:08:32.575 "uuid": "021ce311-35d1-4851-97fd-9113c344172e", 00:08:32.575 "is_configured": true, 00:08:32.575 "data_offset": 2048, 00:08:32.575 "data_size": 63488 00:08:32.575 } 00:08:32.575 ] 00:08:32.575 }' 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.575 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b207e378-7ecf-44d1-a4db-86440c0bf1dd 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 NewBaseBdev 00:08:33.144 [2024-11-28 16:21:24.712456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:33.144 [2024-11-28 16:21:24.712622] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:33.144 [2024-11-28 16:21:24.712638] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.144 [2024-11-28 16:21:24.712898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:33.144 [2024-11-28 16:21:24.713019] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:33.144 [2024-11-28 16:21:24.713028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006d00 00:08:33.144 [2024-11-28 16:21:24.713142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 [ 00:08:33.144 { 00:08:33.144 "name": "NewBaseBdev", 00:08:33.144 "aliases": [ 00:08:33.144 "b207e378-7ecf-44d1-a4db-86440c0bf1dd" 00:08:33.144 ], 00:08:33.144 "product_name": "Malloc disk", 00:08:33.144 "block_size": 512, 
00:08:33.144 "num_blocks": 65536, 00:08:33.144 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd", 00:08:33.144 "assigned_rate_limits": { 00:08:33.144 "rw_ios_per_sec": 0, 00:08:33.144 "rw_mbytes_per_sec": 0, 00:08:33.144 "r_mbytes_per_sec": 0, 00:08:33.144 "w_mbytes_per_sec": 0 00:08:33.144 }, 00:08:33.144 "claimed": true, 00:08:33.144 "claim_type": "exclusive_write", 00:08:33.144 "zoned": false, 00:08:33.144 "supported_io_types": { 00:08:33.144 "read": true, 00:08:33.144 "write": true, 00:08:33.144 "unmap": true, 00:08:33.144 "flush": true, 00:08:33.144 "reset": true, 00:08:33.144 "nvme_admin": false, 00:08:33.144 "nvme_io": false, 00:08:33.144 "nvme_io_md": false, 00:08:33.144 "write_zeroes": true, 00:08:33.144 "zcopy": true, 00:08:33.144 "get_zone_info": false, 00:08:33.144 "zone_management": false, 00:08:33.144 "zone_append": false, 00:08:33.144 "compare": false, 00:08:33.144 "compare_and_write": false, 00:08:33.144 "abort": true, 00:08:33.144 "seek_hole": false, 00:08:33.144 "seek_data": false, 00:08:33.144 "copy": true, 00:08:33.144 "nvme_iov_md": false 00:08:33.144 }, 00:08:33.144 "memory_domains": [ 00:08:33.144 { 00:08:33.144 "dma_device_id": "system", 00:08:33.144 "dma_device_type": 1 00:08:33.144 }, 00:08:33.144 { 00:08:33.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.144 "dma_device_type": 2 00:08:33.144 } 00:08:33.144 ], 00:08:33.144 "driver_specific": {} 00:08:33.144 } 00:08:33.144 ] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:33.144 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.145 "name": "Existed_Raid", 00:08:33.145 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6", 00:08:33.145 "strip_size_kb": 64, 00:08:33.145 "state": "online", 00:08:33.145 "raid_level": "concat", 00:08:33.145 "superblock": true, 00:08:33.145 "num_base_bdevs": 3, 00:08:33.145 "num_base_bdevs_discovered": 3, 00:08:33.145 "num_base_bdevs_operational": 3, 00:08:33.145 "base_bdevs_list": [ 00:08:33.145 { 00:08:33.145 "name": "NewBaseBdev", 00:08:33.145 "uuid": 
"b207e378-7ecf-44d1-a4db-86440c0bf1dd",
00:08:33.145 "is_configured": true,
00:08:33.145 "data_offset": 2048,
00:08:33.145 "data_size": 63488
00:08:33.145 },
00:08:33.145 {
00:08:33.145 "name": "BaseBdev2",
00:08:33.145 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d",
00:08:33.145 "is_configured": true,
00:08:33.145 "data_offset": 2048,
00:08:33.145 "data_size": 63488
00:08:33.145 },
00:08:33.145 {
00:08:33.145 "name": "BaseBdev3",
00:08:33.145 "uuid": "021ce311-35d1-4851-97fd-9113c344172e",
00:08:33.145 "is_configured": true,
00:08:33.145 "data_offset": 2048,
00:08:33.145 "data_size": 63488
00:08:33.145 }
00:08:33.145 ]
00:08:33.145 }'
00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:33.145 16:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.403 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.403 [2024-11-28 16:21:25.168006] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:33.662 "name": "Existed_Raid",
00:08:33.662 "aliases": [
00:08:33.662 "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6"
00:08:33.662 ],
00:08:33.662 "product_name": "Raid Volume",
00:08:33.662 "block_size": 512,
00:08:33.662 "num_blocks": 190464,
00:08:33.662 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6",
00:08:33.662 "assigned_rate_limits": {
00:08:33.662 "rw_ios_per_sec": 0,
00:08:33.662 "rw_mbytes_per_sec": 0,
00:08:33.662 "r_mbytes_per_sec": 0,
00:08:33.662 "w_mbytes_per_sec": 0
00:08:33.662 },
00:08:33.662 "claimed": false,
00:08:33.662 "zoned": false,
00:08:33.662 "supported_io_types": {
00:08:33.662 "read": true,
00:08:33.662 "write": true,
00:08:33.662 "unmap": true,
00:08:33.662 "flush": true,
00:08:33.662 "reset": true,
00:08:33.662 "nvme_admin": false,
00:08:33.662 "nvme_io": false,
00:08:33.662 "nvme_io_md": false,
00:08:33.662 "write_zeroes": true,
00:08:33.662 "zcopy": false,
00:08:33.662 "get_zone_info": false,
00:08:33.662 "zone_management": false,
00:08:33.662 "zone_append": false,
00:08:33.662 "compare": false,
00:08:33.662 "compare_and_write": false,
00:08:33.662 "abort": false,
00:08:33.662 "seek_hole": false,
00:08:33.662 "seek_data": false,
00:08:33.662 "copy": false,
00:08:33.662 "nvme_iov_md": false
00:08:33.662 },
00:08:33.662 "memory_domains": [
00:08:33.662 {
00:08:33.662 "dma_device_id": "system",
00:08:33.662 "dma_device_type": 1
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.662 "dma_device_type": 2
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "dma_device_id": "system",
00:08:33.662 "dma_device_type": 1
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.662 "dma_device_type": 2
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "dma_device_id": "system",
00:08:33.662 "dma_device_type": 1
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.662 "dma_device_type": 2
00:08:33.662 }
00:08:33.662 ],
00:08:33.662 "driver_specific": {
00:08:33.662 "raid": {
00:08:33.662 "uuid": "0f5db87c-1123-46ea-9aa3-ad1daf7f0aa6",
00:08:33.662 "strip_size_kb": 64,
00:08:33.662 "state": "online",
00:08:33.662 "raid_level": "concat",
00:08:33.662 "superblock": true,
00:08:33.662 "num_base_bdevs": 3,
00:08:33.662 "num_base_bdevs_discovered": 3,
00:08:33.662 "num_base_bdevs_operational": 3,
00:08:33.662 "base_bdevs_list": [
00:08:33.662 {
00:08:33.662 "name": "NewBaseBdev",
00:08:33.662 "uuid": "b207e378-7ecf-44d1-a4db-86440c0bf1dd",
00:08:33.662 "is_configured": true,
00:08:33.662 "data_offset": 2048,
00:08:33.662 "data_size": 63488
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "name": "BaseBdev2",
00:08:33.662 "uuid": "cfdad1bc-199a-4440-a7c3-5f1f9afed68d",
00:08:33.662 "is_configured": true,
00:08:33.662 "data_offset": 2048,
00:08:33.662 "data_size": 63488
00:08:33.662 },
00:08:33.662 {
00:08:33.662 "name": "BaseBdev3",
00:08:33.662 "uuid": "021ce311-35d1-4851-97fd-9113c344172e",
00:08:33.662 "is_configured": true,
00:08:33.662 "data_offset": 2048,
00:08:33.662 "data_size": 63488
00:08:33.662 }
00:08:33.662 ]
00:08:33.662 }
00:08:33.662 }
00:08:33.662 }'
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:33.662 BaseBdev2
00:08:33.662 BaseBdev3'
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.662 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.663 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.922 [2024-11-28 16:21:25.447226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:33.922 [2024-11-28 16:21:25.447287] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:33.922 [2024-11-28 16:21:25.447363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:33.922 [2024-11-28 16:21:25.447416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:33.922 [2024-11-28 16:21:25.447427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77368
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77368 ']'
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77368
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77368
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77368'
killing process with pid 77368
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77368
[2024-11-28 16:21:25.496248] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:33.922 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77368
00:08:33.922 [2024-11-28 16:21:25.527264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:34.181 16:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:08:34.181 ************************************
00:08:34.181 END TEST raid_state_function_test_sb
************************************
00:08:34.181
00:08:34.181 real 0m8.633s
00:08:34.181 user 0m14.702s
00:08:34.181 sys 0m1.805s
00:08:34.181 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:34.181 16:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.181 16:21:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:08:34.181 16:21:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:34.181 16:21:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:34.181 16:21:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:34.181 ************************************
00:08:34.181 START TEST raid_superblock_test
************************************
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:08:34.181 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77972
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77972
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77972 ']'
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:34.182 16:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:34.182 [2024-11-28 16:21:25.929952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:34.182 [2024-11-28 16:21:25.930176] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77972 ]
00:08:34.441 [2024-11-28 16:21:26.088085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.441 [2024-11-28 16:21:26.131581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:34.441 [2024-11-28 16:21:26.173235] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:34.441 [2024-11-28 16:21:26.173354] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.012 malloc1
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.012 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.272 [2024-11-28 16:21:26.787092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:35.272 [2024-11-28 16:21:26.787205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:35.272 [2024-11-28 16:21:26.787250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:35.272 [2024-11-28 16:21:26.787296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:35.272 [2024-11-28 16:21:26.789420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:35.272 [2024-11-28 16:21:26.789499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.272 malloc2
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.272 [2024-11-28 16:21:26.823170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:35.272 [2024-11-28 16:21:26.823279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:35.272 [2024-11-28 16:21:26.823319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:35.272 [2024-11-28 16:21:26.823355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:35.272 [2024-11-28 16:21:26.825857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:35.272 [2024-11-28 16:21:26.825936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.272 malloc3
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.272 [2024-11-28 16:21:26.855569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:35.272 [2024-11-28 16:21:26.855656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:35.272 [2024-11-28 16:21:26.855696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:35.272 [2024-11-28 16:21:26.855727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:35.272 [2024-11-28 16:21:26.857732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:35.272 [2024-11-28 16:21:26.857802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.272 [2024-11-28 16:21:26.867598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:35.272 [2024-11-28 16:21:26.869336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:35.272 [2024-11-28 16:21:26.869394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:35.272 [2024-11-28 16:21:26.869528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:35.272 [2024-11-28 16:21:26.869539] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:35.272 [2024-11-28 16:21:26.869759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:08:35.272 [2024-11-28 16:21:26.869883] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:35.272 [2024-11-28 16:21:26.869897] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:08:35.272 [2024-11-28 16:21:26.870023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:35.272 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:35.273 "name": "raid_bdev1",
00:08:35.273 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7",
00:08:35.273 "strip_size_kb": 64,
00:08:35.273 "state": "online",
00:08:35.273 "raid_level": "concat",
00:08:35.273 "superblock": true,
00:08:35.273 "num_base_bdevs": 3,
00:08:35.273 "num_base_bdevs_discovered": 3,
00:08:35.273 "num_base_bdevs_operational": 3,
00:08:35.273 "base_bdevs_list": [
00:08:35.273 {
00:08:35.273 "name": "pt1",
00:08:35.273 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:35.273 "is_configured": true,
00:08:35.273 "data_offset": 2048,
00:08:35.273 "data_size": 63488
00:08:35.273 },
00:08:35.273 {
00:08:35.273 "name": "pt2",
00:08:35.273 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:35.273 "is_configured": true,
00:08:35.273 "data_offset": 2048,
00:08:35.273 "data_size": 63488
00:08:35.273 },
00:08:35.273 {
00:08:35.273 "name": "pt3",
00:08:35.273 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:35.273 "is_configured": true,
00:08:35.273 "data_offset": 2048,
00:08:35.273 "data_size": 63488
00:08:35.273 }
00:08:35.273 ]
00:08:35.273 }'
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:35.273 16:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:35.845 [2024-11-28 16:21:27.311151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:35.845 "name": "raid_bdev1",
00:08:35.845 "aliases": [
00:08:35.845 "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7"
00:08:35.845 ],
00:08:35.845 "product_name": "Raid Volume",
00:08:35.845 "block_size": 512,
00:08:35.845 "num_blocks": 190464,
00:08:35.845 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7",
00:08:35.845 "assigned_rate_limits": {
00:08:35.845 "rw_ios_per_sec": 0,
00:08:35.845 "rw_mbytes_per_sec": 0,
00:08:35.845 "r_mbytes_per_sec": 0,
00:08:35.845 "w_mbytes_per_sec": 0
00:08:35.845 },
00:08:35.845 "claimed": false,
00:08:35.845 "zoned": false,
00:08:35.845 "supported_io_types": {
00:08:35.845 "read": true,
00:08:35.845 "write": true,
00:08:35.845 "unmap": true,
00:08:35.845 "flush": true,
00:08:35.845 "reset": true,
00:08:35.845 "nvme_admin": false,
00:08:35.845 "nvme_io": false,
00:08:35.845 "nvme_io_md": false,
00:08:35.845 "write_zeroes": true,
00:08:35.845 "zcopy": false,
00:08:35.845 "get_zone_info": false,
00:08:35.845 "zone_management": false,
00:08:35.845 "zone_append": false,
00:08:35.845 "compare": false,
00:08:35.845 "compare_and_write": false,
00:08:35.845 "abort": false,
00:08:35.845 "seek_hole": false,
00:08:35.845 "seek_data": false,
00:08:35.845 "copy": false,
00:08:35.845 "nvme_iov_md": false
00:08:35.845 },
00:08:35.845 "memory_domains": [
00:08:35.845 {
00:08:35.845 "dma_device_id": "system",
00:08:35.845 "dma_device_type": 1
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:35.845 "dma_device_type": 2
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "dma_device_id": "system",
00:08:35.845 "dma_device_type": 1
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:35.845 "dma_device_type": 2
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "dma_device_id": "system",
00:08:35.845 "dma_device_type": 1
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:35.845 "dma_device_type": 2
00:08:35.845 }
00:08:35.845 ],
00:08:35.845 "driver_specific": {
00:08:35.845 "raid": {
00:08:35.845 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7",
00:08:35.845 "strip_size_kb": 64,
00:08:35.845 "state": "online",
00:08:35.845 "raid_level": "concat",
00:08:35.845 "superblock": true,
00:08:35.845 "num_base_bdevs": 3,
00:08:35.845 "num_base_bdevs_discovered": 3,
00:08:35.845 "num_base_bdevs_operational": 3,
00:08:35.845 "base_bdevs_list": [
00:08:35.845 {
00:08:35.845 "name": "pt1",
00:08:35.845 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:35.845 "is_configured": true,
00:08:35.845 "data_offset": 2048,
00:08:35.845 "data_size": 63488
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "name": "pt2",
00:08:35.845 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:35.845 "is_configured": true,
00:08:35.845 "data_offset": 2048,
00:08:35.845 "data_size": 63488
00:08:35.845 },
00:08:35.845 {
00:08:35.845 "name": "pt3",
00:08:35.845 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:35.845 "is_configured": true,
00:08:35.845 "data_offset": 2048,
00:08:35.845 "data_size": 63488
00:08:35.845 }
00:08:35.845 ]
00:08:35.845 }
00:08:35.845 }
00:08:35.845 }'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:35.845 pt2
00:08:35.845 pt3'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.845 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:35.845 [2024-11-28 16:21:27.598541] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7 ']'
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.107 [2024-11-28 16:21:27.642206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:36.107 [2024-11-28 16:21:27.642269] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:36.107 [2024-11-28 16:21:27.642341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:36.107 [2024-11-28 16:21:27.642397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:36.107 [2024-11-28 16:21:27.642411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:36.107 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.108 [2024-11-28 16:21:27.782013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:36.108 [2024-11-28 16:21:27.783884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:36.108 [2024-11-28 16:21:27.783972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:36.108 [2024-11-28 16:21:27.784038] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:36.108 [2024-11-28 16:21:27.784120] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:36.108 [2024-11-28 16:21:27.784163] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:36.108 [2024-11-28 16:21:27.784237] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.108 [2024-11-28 16:21:27.784270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:36.108 request: 00:08:36.108 { 00:08:36.108 "name": "raid_bdev1", 00:08:36.108 "raid_level": "concat", 00:08:36.108 "base_bdevs": [ 00:08:36.108 "malloc1", 00:08:36.108 "malloc2", 00:08:36.108 "malloc3" 00:08:36.108 ], 00:08:36.108 "strip_size_kb": 64, 00:08:36.108 "superblock": false, 00:08:36.108 "method": "bdev_raid_create", 00:08:36.108 "req_id": 1 00:08:36.108 } 00:08:36.108 Got JSON-RPC error response 00:08:36.108 response: 00:08:36.108 { 00:08:36.108 "code": -17, 00:08:36.108 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:36.108 } 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
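The `NOT rpc_cmd bdev_raid_create ...` sequence above is an expected-failure check: creating `raid_bdev1` a second time must fail with `-17 File exists`, and the `es=1` bookkeeping records that the failure occurred. A simplified sketch of that wrapper (assumption: this is an illustrative reduction of the autotest_common.sh helper, not its exact code):

```shell
# NOT inverts the wrapped command's exit status: it succeeds only when the
# command fails, so an "expected error" test case passes.
NOT() {
  es=0
  "$@" || es=$?
  # Success for this wrapper means the wrapped command exited non-zero.
  [ "$es" -ne 0 ]
}

NOT false && echo "expected failure observed"
```

With this shape, `NOT rpc_cmd ...` passes exactly when the RPC returns an error, which is what the trace's `[[ 1 == 0 ]]` / `es=1` lines are verifying.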
== 0 )) 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.108 [2024-11-28 16:21:27.837882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:36.108 [2024-11-28 16:21:27.837964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.108 [2024-11-28 16:21:27.838009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:36.108 [2024-11-28 16:21:27.838038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.108 [2024-11-28 16:21:27.840051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.108 [2024-11-28 16:21:27.840123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:36.108 [2024-11-28 16:21:27.840202] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:36.108 [2024-11-28 16:21:27.840257] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:36.108 pt1 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.108 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.368 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.368 "name": "raid_bdev1", 
00:08:36.368 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7", 00:08:36.368 "strip_size_kb": 64, 00:08:36.368 "state": "configuring", 00:08:36.368 "raid_level": "concat", 00:08:36.368 "superblock": true, 00:08:36.368 "num_base_bdevs": 3, 00:08:36.368 "num_base_bdevs_discovered": 1, 00:08:36.368 "num_base_bdevs_operational": 3, 00:08:36.368 "base_bdevs_list": [ 00:08:36.368 { 00:08:36.368 "name": "pt1", 00:08:36.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.368 "is_configured": true, 00:08:36.368 "data_offset": 2048, 00:08:36.368 "data_size": 63488 00:08:36.368 }, 00:08:36.368 { 00:08:36.368 "name": null, 00:08:36.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.368 "is_configured": false, 00:08:36.368 "data_offset": 2048, 00:08:36.368 "data_size": 63488 00:08:36.368 }, 00:08:36.368 { 00:08:36.368 "name": null, 00:08:36.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:36.368 "is_configured": false, 00:08:36.368 "data_offset": 2048, 00:08:36.368 "data_size": 63488 00:08:36.368 } 00:08:36.368 ] 00:08:36.368 }' 00:08:36.368 16:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.368 16:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.628 [2024-11-28 16:21:28.273140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:36.628 [2024-11-28 16:21:28.273236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.628 [2024-11-28 16:21:28.273285] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:36.628 [2024-11-28 16:21:28.273316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.628 [2024-11-28 16:21:28.273678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.628 [2024-11-28 16:21:28.273734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:36.628 [2024-11-28 16:21:28.273821] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:36.628 [2024-11-28 16:21:28.273881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:36.628 pt2 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.628 [2024-11-28 16:21:28.281137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.628 "name": "raid_bdev1", 00:08:36.628 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7", 00:08:36.628 "strip_size_kb": 64, 00:08:36.628 "state": "configuring", 00:08:36.628 "raid_level": "concat", 00:08:36.628 "superblock": true, 00:08:36.628 "num_base_bdevs": 3, 00:08:36.628 "num_base_bdevs_discovered": 1, 00:08:36.628 "num_base_bdevs_operational": 3, 00:08:36.628 "base_bdevs_list": [ 00:08:36.628 { 00:08:36.628 "name": "pt1", 00:08:36.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.628 "is_configured": true, 00:08:36.628 "data_offset": 2048, 00:08:36.628 "data_size": 63488 00:08:36.628 }, 00:08:36.628 { 00:08:36.628 "name": null, 00:08:36.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.628 "is_configured": false, 00:08:36.628 "data_offset": 0, 00:08:36.628 "data_size": 63488 00:08:36.628 }, 00:08:36.628 { 00:08:36.628 "name": null, 00:08:36.628 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:36.628 "is_configured": false, 00:08:36.628 "data_offset": 2048, 00:08:36.628 "data_size": 63488 00:08:36.628 } 00:08:36.628 ] 00:08:36.628 }' 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.628 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.198 [2024-11-28 16:21:28.728373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.198 [2024-11-28 16:21:28.728494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.198 [2024-11-28 16:21:28.728529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:37.198 [2024-11-28 16:21:28.728558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.198 [2024-11-28 16:21:28.728980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.198 [2024-11-28 16:21:28.729039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:37.198 [2024-11-28 16:21:28.729136] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:37.198 [2024-11-28 16:21:28.729183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.198 pt2 00:08:37.198 16:21:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.198 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.198 [2024-11-28 16:21:28.740323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:37.198 [2024-11-28 16:21:28.740404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.198 [2024-11-28 16:21:28.740454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:37.198 [2024-11-28 16:21:28.740482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.198 [2024-11-28 16:21:28.740816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.199 [2024-11-28 16:21:28.740907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:37.199 [2024-11-28 16:21:28.740988] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:37.199 [2024-11-28 16:21:28.741033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:37.199 [2024-11-28 16:21:28.741153] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:37.199 [2024-11-28 16:21:28.741194] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.199 [2024-11-28 16:21:28.741430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:37.199 [2024-11-28 16:21:28.741564] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:37.199 [2024-11-28 16:21:28.741603] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:37.199 [2024-11-28 16:21:28.741730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.199 pt3 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.199 16:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.199 "name": "raid_bdev1", 00:08:37.199 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7", 00:08:37.199 "strip_size_kb": 64, 00:08:37.199 "state": "online", 00:08:37.199 "raid_level": "concat", 00:08:37.199 "superblock": true, 00:08:37.199 "num_base_bdevs": 3, 00:08:37.199 "num_base_bdevs_discovered": 3, 00:08:37.199 "num_base_bdevs_operational": 3, 00:08:37.199 "base_bdevs_list": [ 00:08:37.199 { 00:08:37.199 "name": "pt1", 00:08:37.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.199 "is_configured": true, 00:08:37.199 "data_offset": 2048, 00:08:37.199 "data_size": 63488 00:08:37.199 }, 00:08:37.199 { 00:08:37.199 "name": "pt2", 00:08:37.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.199 "is_configured": true, 00:08:37.199 "data_offset": 2048, 00:08:37.199 "data_size": 63488 00:08:37.199 }, 00:08:37.199 { 00:08:37.199 "name": "pt3", 00:08:37.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.199 "is_configured": true, 00:08:37.199 "data_offset": 2048, 00:08:37.199 "data_size": 63488 00:08:37.199 } 00:08:37.199 ] 00:08:37.199 }' 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.199 16:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
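The `verify_raid_bdev_state` checks above pull fields such as `num_base_bdevs_discovered` out of this JSON with jq filters like `.[] | select(.name == "raid_bdev1")`. A rough jq-free stand-in that counts configured base bdevs (assumption: the JSON shape is copied from the trace, and grep-counting is only a demonstration, not how the harness does it):

```shell
# Trimmed copy of the raid bdev info dumped in the trace.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "base_bdevs_list": [
    { "name": "pt1", "is_configured": true },
    { "name": "pt2", "is_configured": true },
    { "name": "pt3", "is_configured": true }
  ]
}'

# Count base bdevs marked configured; the trace expects all 3 once the raid
# bdev goes online.
num_base_bdevs_discovered=$(printf '%s\n' "$raid_bdev_info" | grep -c '"is_configured": true')
echo "$num_base_bdevs_discovered"
```

In the configuring-state dumps earlier in the trace, only `pt1` is configured, so the same count would be 1 there.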
local raid_bdev_name=raid_bdev1 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.540 [2024-11-28 16:21:29.195887] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.540 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.540 "name": "raid_bdev1", 00:08:37.540 "aliases": [ 00:08:37.540 "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7" 00:08:37.540 ], 00:08:37.540 "product_name": "Raid Volume", 00:08:37.540 "block_size": 512, 00:08:37.540 "num_blocks": 190464, 00:08:37.540 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7", 00:08:37.540 "assigned_rate_limits": { 00:08:37.540 "rw_ios_per_sec": 0, 00:08:37.540 "rw_mbytes_per_sec": 0, 00:08:37.540 "r_mbytes_per_sec": 0, 00:08:37.540 "w_mbytes_per_sec": 0 00:08:37.540 }, 00:08:37.540 "claimed": false, 00:08:37.540 "zoned": false, 00:08:37.540 "supported_io_types": { 00:08:37.540 "read": true, 00:08:37.540 "write": true, 00:08:37.540 "unmap": true, 00:08:37.540 "flush": true, 00:08:37.540 "reset": true, 00:08:37.540 "nvme_admin": false, 00:08:37.540 "nvme_io": false, 
00:08:37.540 "nvme_io_md": false, 00:08:37.540 "write_zeroes": true, 00:08:37.540 "zcopy": false, 00:08:37.540 "get_zone_info": false, 00:08:37.540 "zone_management": false, 00:08:37.540 "zone_append": false, 00:08:37.541 "compare": false, 00:08:37.541 "compare_and_write": false, 00:08:37.541 "abort": false, 00:08:37.541 "seek_hole": false, 00:08:37.541 "seek_data": false, 00:08:37.541 "copy": false, 00:08:37.541 "nvme_iov_md": false 00:08:37.541 }, 00:08:37.541 "memory_domains": [ 00:08:37.541 { 00:08:37.541 "dma_device_id": "system", 00:08:37.541 "dma_device_type": 1 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.541 "dma_device_type": 2 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "dma_device_id": "system", 00:08:37.541 "dma_device_type": 1 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.541 "dma_device_type": 2 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "dma_device_id": "system", 00:08:37.541 "dma_device_type": 1 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.541 "dma_device_type": 2 00:08:37.541 } 00:08:37.541 ], 00:08:37.541 "driver_specific": { 00:08:37.541 "raid": { 00:08:37.541 "uuid": "c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7", 00:08:37.541 "strip_size_kb": 64, 00:08:37.541 "state": "online", 00:08:37.541 "raid_level": "concat", 00:08:37.541 "superblock": true, 00:08:37.541 "num_base_bdevs": 3, 00:08:37.541 "num_base_bdevs_discovered": 3, 00:08:37.541 "num_base_bdevs_operational": 3, 00:08:37.541 "base_bdevs_list": [ 00:08:37.541 { 00:08:37.541 "name": "pt1", 00:08:37.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.541 "is_configured": true, 00:08:37.541 "data_offset": 2048, 00:08:37.541 "data_size": 63488 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "name": "pt2", 00:08:37.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.541 "is_configured": true, 00:08:37.541 "data_offset": 2048, 00:08:37.541 
"data_size": 63488 00:08:37.541 }, 00:08:37.541 { 00:08:37.541 "name": "pt3", 00:08:37.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:37.541 "is_configured": true, 00:08:37.541 "data_offset": 2048, 00:08:37.541 "data_size": 63488 00:08:37.541 } 00:08:37.541 ] 00:08:37.541 } 00:08:37.541 } 00:08:37.541 }' 00:08:37.541 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.541 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:37.541 pt2 00:08:37.541 pt3' 00:08:37.541 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:37.817 [2024-11-28 16:21:29.471340] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7 '!=' c9d5b246-8c4f-4d08-b9f6-f96a2ea1e8f7 ']' 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.817 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77972 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77972 ']' 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77972 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77972 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77972' 00:08:37.818 killing process with pid 77972 00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77972 00:08:37.818 [2024-11-28 16:21:29.557823] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:37.818 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77972 00:08:37.818 [2024-11-28 16:21:29.557987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.818 [2024-11-28 16:21:29.558079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.818 [2024-11-28 16:21:29.558126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:38.077 [2024-11-28 16:21:29.591861] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.077 16:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:38.077 00:08:38.077 real 0m3.995s 00:08:38.077 user 0m6.278s 00:08:38.077 sys 0m0.841s 00:08:38.077 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.077 16:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.077 ************************************ 00:08:38.077 END TEST raid_superblock_test 00:08:38.077 ************************************ 00:08:38.337 16:21:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:38.337 16:21:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:38.337 16:21:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.337 16:21:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 ************************************ 00:08:38.337 START TEST raid_read_error_test 00:08:38.337 ************************************ 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:38.337 16:21:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OWw0Ooasci 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78215 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78215 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78215 ']' 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.337 16:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.337 [2024-11-28 16:21:30.017940] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:38.337 [2024-11-28 16:21:30.018130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78215 ] 00:08:38.596 [2024-11-28 16:21:30.178034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.596 [2024-11-28 16:21:30.222873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.596 [2024-11-28 16:21:30.265064] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.596 [2024-11-28 16:21:30.265181] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.164 BaseBdev1_malloc 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.164 true 00:08:39.164 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.165 [2024-11-28 16:21:30.883236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.165 [2024-11-28 16:21:30.883383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.165 [2024-11-28 16:21:30.883437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.165 [2024-11-28 16:21:30.883466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.165 [2024-11-28 16:21:30.885553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.165 [2024-11-28 16:21:30.885625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.165 BaseBdev1 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.165 BaseBdev2_malloc 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.165 true 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.165 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.165 [2024-11-28 16:21:30.933450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.165 [2024-11-28 16:21:30.933563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.165 [2024-11-28 16:21:30.933584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.165 [2024-11-28 16:21:30.933592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.424 [2024-11-28 16:21:30.935579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.424 [2024-11-28 16:21:30.935615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.424 BaseBdev2 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.424 BaseBdev3_malloc 00:08:39.424 16:21:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.424 true 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.424 [2024-11-28 16:21:30.974303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:39.424 [2024-11-28 16:21:30.974354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.424 [2024-11-28 16:21:30.974388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:39.424 [2024-11-28 16:21:30.974396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.424 [2024-11-28 16:21:30.976375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.424 [2024-11-28 16:21:30.976482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:39.424 BaseBdev3 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.424 [2024-11-28 16:21:30.986349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.424 [2024-11-28 16:21:30.988117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.424 [2024-11-28 16:21:30.988193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.424 [2024-11-28 16:21:30.988358] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:39.424 [2024-11-28 16:21:30.988371] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.424 [2024-11-28 16:21:30.988598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:39.424 [2024-11-28 16:21:30.988714] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:39.424 [2024-11-28 16:21:30.988728] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:39.424 [2024-11-28 16:21:30.988856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.424 16:21:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.424 16:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.424 16:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.424 16:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.424 "name": "raid_bdev1", 00:08:39.424 "uuid": "acdacc26-30a7-47bf-92b4-de1793259808", 00:08:39.424 "strip_size_kb": 64, 00:08:39.424 "state": "online", 00:08:39.424 "raid_level": "concat", 00:08:39.424 "superblock": true, 00:08:39.424 "num_base_bdevs": 3, 00:08:39.424 "num_base_bdevs_discovered": 3, 00:08:39.424 "num_base_bdevs_operational": 3, 00:08:39.424 "base_bdevs_list": [ 00:08:39.424 { 00:08:39.424 "name": "BaseBdev1", 00:08:39.424 "uuid": "bce05430-63b7-5db4-8a8d-e1427e7067d0", 00:08:39.424 "is_configured": true, 00:08:39.424 "data_offset": 2048, 00:08:39.424 "data_size": 63488 00:08:39.424 }, 00:08:39.424 { 00:08:39.424 "name": "BaseBdev2", 00:08:39.424 "uuid": "10ed835a-0315-5384-ad54-294fc2b8eca0", 00:08:39.424 "is_configured": true, 00:08:39.424 "data_offset": 2048, 00:08:39.424 "data_size": 63488 
00:08:39.424 }, 00:08:39.424 { 00:08:39.424 "name": "BaseBdev3", 00:08:39.424 "uuid": "86c7d289-85ec-5598-bdc3-30311e94e3b1", 00:08:39.424 "is_configured": true, 00:08:39.424 "data_offset": 2048, 00:08:39.424 "data_size": 63488 00:08:39.424 } 00:08:39.424 ] 00:08:39.424 }' 00:08:39.424 16:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.424 16:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.993 16:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:39.993 16:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:39.993 [2024-11-28 16:21:31.549734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.932 "name": "raid_bdev1", 00:08:40.932 "uuid": "acdacc26-30a7-47bf-92b4-de1793259808", 00:08:40.932 "strip_size_kb": 64, 00:08:40.932 "state": "online", 00:08:40.932 "raid_level": "concat", 00:08:40.932 "superblock": true, 00:08:40.932 "num_base_bdevs": 3, 00:08:40.932 "num_base_bdevs_discovered": 3, 00:08:40.932 "num_base_bdevs_operational": 3, 00:08:40.932 "base_bdevs_list": [ 00:08:40.932 { 00:08:40.932 "name": "BaseBdev1", 00:08:40.932 "uuid": "bce05430-63b7-5db4-8a8d-e1427e7067d0", 00:08:40.932 "is_configured": true, 00:08:40.932 "data_offset": 2048, 00:08:40.932 "data_size": 63488 
00:08:40.932 }, 00:08:40.932 { 00:08:40.932 "name": "BaseBdev2", 00:08:40.932 "uuid": "10ed835a-0315-5384-ad54-294fc2b8eca0", 00:08:40.932 "is_configured": true, 00:08:40.932 "data_offset": 2048, 00:08:40.932 "data_size": 63488 00:08:40.932 }, 00:08:40.932 { 00:08:40.932 "name": "BaseBdev3", 00:08:40.932 "uuid": "86c7d289-85ec-5598-bdc3-30311e94e3b1", 00:08:40.932 "is_configured": true, 00:08:40.932 "data_offset": 2048, 00:08:40.932 "data_size": 63488 00:08:40.932 } 00:08:40.932 ] 00:08:40.932 }' 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.932 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.192 [2024-11-28 16:21:32.892933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.192 [2024-11-28 16:21:32.893059] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.192 [2024-11-28 16:21:32.895622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.192 [2024-11-28 16:21:32.895671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.192 [2024-11-28 16:21:32.895731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.192 [2024-11-28 16:21:32.895743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:41.192 { 00:08:41.192 "results": [ 00:08:41.192 { 00:08:41.192 "job": "raid_bdev1", 00:08:41.192 "core_mask": "0x1", 00:08:41.192 "workload": "randrw", 00:08:41.192 "percentage": 50, 
00:08:41.192 "status": "finished", 00:08:41.192 "queue_depth": 1, 00:08:41.192 "io_size": 131072, 00:08:41.192 "runtime": 1.344129, 00:08:41.192 "iops": 17456.65780591, 00:08:41.192 "mibps": 2182.08222573875, 00:08:41.192 "io_failed": 1, 00:08:41.192 "io_timeout": 0, 00:08:41.192 "avg_latency_us": 79.38126709202686, 00:08:41.192 "min_latency_us": 24.482096069868994, 00:08:41.192 "max_latency_us": 1337.907423580786 00:08:41.192 } 00:08:41.192 ], 00:08:41.192 "core_count": 1 00:08:41.192 } 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78215 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78215 ']' 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78215 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78215 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78215' 00:08:41.192 killing process with pid 78215 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78215 00:08:41.192 [2024-11-28 16:21:32.941639] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.192 16:21:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78215 00:08:41.452 [2024-11-28 
16:21:32.967974] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OWw0Ooasci 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:41.452 ************************************ 00:08:41.452 END TEST raid_read_error_test 00:08:41.452 ************************************ 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:41.452 00:08:41.452 real 0m3.297s 00:08:41.452 user 0m4.155s 00:08:41.452 sys 0m0.537s 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.452 16:21:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.712 16:21:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:41.712 16:21:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:41.712 16:21:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.712 16:21:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.712 ************************************ 00:08:41.712 START TEST raid_write_error_test 00:08:41.712 ************************************ 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:41.712 16:21:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:41.712 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:41.713 16:21:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MiXxc2DB1C 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78344 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78344 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78344 ']' 00:08:41.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.713 16:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.713 [2024-11-28 16:21:33.391731] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:41.713 [2024-11-28 16:21:33.391891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78344 ] 00:08:41.972 [2024-11-28 16:21:33.546074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.972 [2024-11-28 16:21:33.590048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.972 [2024-11-28 16:21:33.631828] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.972 [2024-11-28 16:21:33.631876] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.542 BaseBdev1_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.542 true 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.542 [2024-11-28 16:21:34.241982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:42.542 [2024-11-28 16:21:34.242053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.542 [2024-11-28 16:21:34.242071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:42.542 [2024-11-28 16:21:34.242080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.542 [2024-11-28 16:21:34.244072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.542 [2024-11-28 16:21:34.244109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:42.542 BaseBdev1 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.542 BaseBdev2_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.542 true 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.542 [2024-11-28 16:21:34.294806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:42.542 [2024-11-28 16:21:34.294883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.542 [2024-11-28 16:21:34.294905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:42.542 [2024-11-28 16:21:34.294915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.542 [2024-11-28 16:21:34.297363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.542 [2024-11-28 16:21:34.297406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:42.542 BaseBdev2 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.542 16:21:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.542 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.802 BaseBdev3_malloc 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.802 true 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.802 [2024-11-28 16:21:34.335141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:42.802 [2024-11-28 16:21:34.335192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.802 [2024-11-28 16:21:34.335210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:42.802 [2024-11-28 16:21:34.335218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.802 [2024-11-28 16:21:34.337176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.802 [2024-11-28 16:21:34.337212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:42.802 BaseBdev3 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.802 [2024-11-28 16:21:34.347178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.802 [2024-11-28 16:21:34.348922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.802 [2024-11-28 16:21:34.348999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.802 [2024-11-28 16:21:34.349161] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:42.802 [2024-11-28 16:21:34.349183] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.802 [2024-11-28 16:21:34.349408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:42.802 [2024-11-28 16:21:34.349546] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:42.802 [2024-11-28 16:21:34.349560] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:42.802 [2024-11-28 16:21:34.349688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- 
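Each base device in the trace is a three-layer stack created over RPC: a malloc bdev, an error-injection bdev on top of it (which gets the `EE_` prefix), and a passthru bdev exposing the final `BaseBdevN` name. A dry-run sketch of that sequence for one device — `rpc()` only prints the command here so the sketch runs without a live SPDK target; against a real target it would invoke `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Dry-run of the per-device RPC chain seen in the trace:
# malloc bdev -> error bdev (EE_ prefix) -> passthru bdev with the final name.
rpc() { echo "rpc.py $*"; }  # stand-in; a live run would call scripts/rpc.py

setup_base_bdev() {
    local name=$1
    rpc bdev_malloc_create 32 512 -b "${name}_malloc"
    rpc bdev_error_create "${name}_malloc"
    rpc bdev_passthru_create -b "EE_${name}_malloc" -p "$name"
}

setup_base_bdev BaseBdev1
```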
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.802 "name": "raid_bdev1", 00:08:42.802 "uuid": "e28f808d-eabf-4d97-9737-087836ff0554", 00:08:42.802 "strip_size_kb": 64, 00:08:42.802 "state": "online", 00:08:42.802 "raid_level": "concat", 00:08:42.802 "superblock": true, 00:08:42.802 "num_base_bdevs": 3, 00:08:42.802 "num_base_bdevs_discovered": 3, 00:08:42.802 "num_base_bdevs_operational": 3, 00:08:42.802 "base_bdevs_list": [ 00:08:42.802 { 00:08:42.802 
"name": "BaseBdev1", 00:08:42.802 "uuid": "9b934d2d-9c1a-5662-9566-e933a51a2140", 00:08:42.802 "is_configured": true, 00:08:42.802 "data_offset": 2048, 00:08:42.802 "data_size": 63488 00:08:42.802 }, 00:08:42.802 { 00:08:42.802 "name": "BaseBdev2", 00:08:42.802 "uuid": "32451f26-1420-5ef0-8633-78626ace91fb", 00:08:42.802 "is_configured": true, 00:08:42.802 "data_offset": 2048, 00:08:42.802 "data_size": 63488 00:08:42.802 }, 00:08:42.802 { 00:08:42.802 "name": "BaseBdev3", 00:08:42.802 "uuid": "3e0f3a21-0097-529c-a97f-152a068d4a61", 00:08:42.802 "is_configured": true, 00:08:42.802 "data_offset": 2048, 00:08:42.802 "data_size": 63488 00:08:42.802 } 00:08:42.802 ] 00:08:42.802 }' 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.802 16:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.061 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:43.062 16:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:43.322 [2024-11-28 16:21:34.890624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.262 "name": "raid_bdev1", 00:08:44.262 "uuid": "e28f808d-eabf-4d97-9737-087836ff0554", 00:08:44.262 "strip_size_kb": 64, 00:08:44.262 "state": "online", 
00:08:44.262 "raid_level": "concat", 00:08:44.262 "superblock": true, 00:08:44.262 "num_base_bdevs": 3, 00:08:44.262 "num_base_bdevs_discovered": 3, 00:08:44.262 "num_base_bdevs_operational": 3, 00:08:44.262 "base_bdevs_list": [ 00:08:44.262 { 00:08:44.262 "name": "BaseBdev1", 00:08:44.262 "uuid": "9b934d2d-9c1a-5662-9566-e933a51a2140", 00:08:44.262 "is_configured": true, 00:08:44.262 "data_offset": 2048, 00:08:44.262 "data_size": 63488 00:08:44.262 }, 00:08:44.262 { 00:08:44.262 "name": "BaseBdev2", 00:08:44.262 "uuid": "32451f26-1420-5ef0-8633-78626ace91fb", 00:08:44.262 "is_configured": true, 00:08:44.262 "data_offset": 2048, 00:08:44.262 "data_size": 63488 00:08:44.262 }, 00:08:44.262 { 00:08:44.262 "name": "BaseBdev3", 00:08:44.262 "uuid": "3e0f3a21-0097-529c-a97f-152a068d4a61", 00:08:44.262 "is_configured": true, 00:08:44.262 "data_offset": 2048, 00:08:44.262 "data_size": 63488 00:08:44.262 } 00:08:44.262 ] 00:08:44.262 }' 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.262 16:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.522 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.522 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.522 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.522 [2024-11-28 16:21:36.269993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.522 [2024-11-28 16:21:36.270133] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.522 [2024-11-28 16:21:36.272587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.522 [2024-11-28 16:21:36.272680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.522 [2024-11-28 16:21:36.272733] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.522 [2024-11-28 16:21:36.272776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:44.522 { 00:08:44.522 "results": [ 00:08:44.522 { 00:08:44.522 "job": "raid_bdev1", 00:08:44.522 "core_mask": "0x1", 00:08:44.522 "workload": "randrw", 00:08:44.522 "percentage": 50, 00:08:44.522 "status": "finished", 00:08:44.522 "queue_depth": 1, 00:08:44.523 "io_size": 131072, 00:08:44.523 "runtime": 1.380451, 00:08:44.523 "iops": 17500.802274039426, 00:08:44.523 "mibps": 2187.6002842549283, 00:08:44.523 "io_failed": 1, 00:08:44.523 "io_timeout": 0, 00:08:44.523 "avg_latency_us": 79.1969877671419, 00:08:44.523 "min_latency_us": 24.146724890829695, 00:08:44.523 "max_latency_us": 1387.989519650655 00:08:44.523 } 00:08:44.523 ], 00:08:44.523 "core_count": 1 00:08:44.523 } 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78344 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78344 ']' 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78344 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.523 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78344 00:08:44.782 killing process with pid 78344 00:08:44.782 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.782 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.782 16:21:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78344' 00:08:44.782 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78344 00:08:44.782 [2024-11-28 16:21:36.323753] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.782 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78344 00:08:44.782 [2024-11-28 16:21:36.348980] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MiXxc2DB1C 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:45.042 00:08:45.042 real 0m3.303s 00:08:45.042 user 0m4.185s 00:08:45.042 sys 0m0.519s 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.042 16:21:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.042 ************************************ 00:08:45.042 END TEST raid_write_error_test 00:08:45.042 ************************************ 00:08:45.042 16:21:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:45.042 16:21:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
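After the run, the trace computes `fail_per_s` by dropping the `Job` header line from the bdevperf log, keeping the `raid_bdev1` row, and printing the sixth whitespace-separated field. A sketch of that pipeline — note the sample log layout below is hypothetical; only the field position (6) and the grep/awk pipeline come from the traced commands:

```shell
#!/usr/bin/env bash
# fail_per_s extraction as in the trace: skip the "Job" header line, keep the
# raid_bdev1 row, print field 6. The row layout here is a made-up stand-in
# for the real bdevperf log, arranged so field 6 is the failures-per-second.
log='Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 17500.80 2187.60 1 0 0.72'
fail_per_s=$(echo "$log" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"
```

The test then asserts `[[ $fail_per_s != \0\.\0\0 ]]`, i.e. that the injected write error on `EE_BaseBdev1_malloc` actually produced a nonzero failure rate.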
raid_state_function_test raid1 3 false 00:08:45.042 16:21:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:45.042 16:21:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.042 16:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.042 ************************************ 00:08:45.042 START TEST raid_state_function_test 00:08:45.042 ************************************ 00:08:45.042 16:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:45.042 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78471 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78471' 00:08:45.043 Process raid pid: 78471 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78471 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78471 ']' 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.043 16:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.043 [2024-11-28 16:21:36.759172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:45.043 [2024-11-28 16:21:36.759371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.303 [2024-11-28 16:21:36.914297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.303 [2024-11-28 16:21:36.958110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.303 [2024-11-28 16:21:37.000264] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.303 [2024-11-28 16:21:37.000294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.873 [2024-11-28 16:21:37.593375] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.873 [2024-11-28 16:21:37.593526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.873 [2024-11-28 16:21:37.593549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.873 [2024-11-28 16:21:37.593560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.873 [2024-11-28 16:21:37.593566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.873 [2024-11-28 16:21:37.593579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.873 
16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.873 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.133 16:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.133 "name": "Existed_Raid", 00:08:46.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.133 "strip_size_kb": 0, 00:08:46.133 "state": "configuring", 00:08:46.133 "raid_level": "raid1", 00:08:46.133 "superblock": false, 00:08:46.133 "num_base_bdevs": 3, 00:08:46.133 "num_base_bdevs_discovered": 0, 00:08:46.133 "num_base_bdevs_operational": 3, 00:08:46.133 "base_bdevs_list": [ 00:08:46.133 { 00:08:46.133 "name": "BaseBdev1", 00:08:46.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.133 "is_configured": false, 00:08:46.133 "data_offset": 0, 00:08:46.133 "data_size": 0 00:08:46.133 }, 00:08:46.133 { 00:08:46.133 "name": "BaseBdev2", 00:08:46.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.133 "is_configured": false, 00:08:46.133 "data_offset": 0, 00:08:46.133 "data_size": 0 00:08:46.133 }, 00:08:46.133 { 00:08:46.133 "name": "BaseBdev3", 00:08:46.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.133 "is_configured": false, 00:08:46.133 "data_offset": 0, 00:08:46.133 "data_size": 0 00:08:46.133 } 00:08:46.133 ] 00:08:46.133 }' 00:08:46.133 16:21:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.133 16:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 [2024-11-28 16:21:38.012566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.393 [2024-11-28 16:21:38.012661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 [2024-11-28 16:21:38.024565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.393 [2024-11-28 16:21:38.024652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.393 [2024-11-28 16:21:38.024694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.393 [2024-11-28 16:21:38.024715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.393 [2024-11-28 16:21:38.024733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.393 [2024-11-28 16:21:38.024753] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 [2024-11-28 16:21:38.045459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.393 BaseBdev1 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 [ 00:08:46.393 { 00:08:46.393 "name": "BaseBdev1", 00:08:46.393 "aliases": [ 00:08:46.393 "091dc25b-e152-4cfd-80fa-48a4b2546a34" 00:08:46.393 ], 00:08:46.393 "product_name": "Malloc disk", 00:08:46.393 "block_size": 512, 00:08:46.393 "num_blocks": 65536, 00:08:46.393 "uuid": "091dc25b-e152-4cfd-80fa-48a4b2546a34", 00:08:46.393 "assigned_rate_limits": { 00:08:46.393 "rw_ios_per_sec": 0, 00:08:46.393 "rw_mbytes_per_sec": 0, 00:08:46.393 "r_mbytes_per_sec": 0, 00:08:46.393 "w_mbytes_per_sec": 0 00:08:46.393 }, 00:08:46.393 "claimed": true, 00:08:46.393 "claim_type": "exclusive_write", 00:08:46.393 "zoned": false, 00:08:46.393 "supported_io_types": { 00:08:46.393 "read": true, 00:08:46.393 "write": true, 00:08:46.393 "unmap": true, 00:08:46.393 "flush": true, 00:08:46.393 "reset": true, 00:08:46.393 "nvme_admin": false, 00:08:46.393 "nvme_io": false, 00:08:46.393 "nvme_io_md": false, 00:08:46.393 "write_zeroes": true, 00:08:46.393 "zcopy": true, 00:08:46.393 "get_zone_info": false, 00:08:46.393 "zone_management": false, 00:08:46.393 "zone_append": false, 00:08:46.393 "compare": false, 00:08:46.393 "compare_and_write": false, 00:08:46.393 "abort": true, 00:08:46.393 "seek_hole": false, 00:08:46.393 "seek_data": false, 00:08:46.393 "copy": true, 00:08:46.393 "nvme_iov_md": false 00:08:46.393 }, 00:08:46.393 "memory_domains": [ 00:08:46.393 { 00:08:46.393 "dma_device_id": "system", 00:08:46.393 "dma_device_type": 1 00:08:46.393 }, 00:08:46.393 { 00:08:46.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.393 "dma_device_type": 2 00:08:46.393 } 00:08:46.393 ], 00:08:46.393 "driver_specific": {} 00:08:46.393 } 00:08:46.393 ] 00:08:46.393 16:21:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.393 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:46.394 "name": "Existed_Raid", 00:08:46.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.394 "strip_size_kb": 0, 00:08:46.394 "state": "configuring", 00:08:46.394 "raid_level": "raid1", 00:08:46.394 "superblock": false, 00:08:46.394 "num_base_bdevs": 3, 00:08:46.394 "num_base_bdevs_discovered": 1, 00:08:46.394 "num_base_bdevs_operational": 3, 00:08:46.394 "base_bdevs_list": [ 00:08:46.394 { 00:08:46.394 "name": "BaseBdev1", 00:08:46.394 "uuid": "091dc25b-e152-4cfd-80fa-48a4b2546a34", 00:08:46.394 "is_configured": true, 00:08:46.394 "data_offset": 0, 00:08:46.394 "data_size": 65536 00:08:46.394 }, 00:08:46.394 { 00:08:46.394 "name": "BaseBdev2", 00:08:46.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.394 "is_configured": false, 00:08:46.394 "data_offset": 0, 00:08:46.394 "data_size": 0 00:08:46.394 }, 00:08:46.394 { 00:08:46.394 "name": "BaseBdev3", 00:08:46.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.394 "is_configured": false, 00:08:46.394 "data_offset": 0, 00:08:46.394 "data_size": 0 00:08:46.394 } 00:08:46.394 ] 00:08:46.394 }' 00:08:46.394 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.394 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.962 [2024-11-28 16:21:38.524659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.962 [2024-11-28 16:21:38.524711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.962 [2024-11-28 16:21:38.536675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.962 [2024-11-28 16:21:38.538474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.962 [2024-11-28 16:21:38.538519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.962 [2024-11-28 16:21:38.538530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.962 [2024-11-28 16:21:38.538539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.962 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.963 "name": "Existed_Raid", 00:08:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.963 "strip_size_kb": 0, 00:08:46.963 "state": "configuring", 00:08:46.963 "raid_level": "raid1", 00:08:46.963 "superblock": false, 00:08:46.963 "num_base_bdevs": 3, 00:08:46.963 "num_base_bdevs_discovered": 1, 00:08:46.963 "num_base_bdevs_operational": 3, 00:08:46.963 "base_bdevs_list": [ 00:08:46.963 { 00:08:46.963 "name": "BaseBdev1", 00:08:46.963 "uuid": "091dc25b-e152-4cfd-80fa-48a4b2546a34", 00:08:46.963 "is_configured": true, 00:08:46.963 "data_offset": 0, 00:08:46.963 "data_size": 65536 00:08:46.963 }, 00:08:46.963 { 00:08:46.963 "name": "BaseBdev2", 00:08:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.963 
"is_configured": false, 00:08:46.963 "data_offset": 0, 00:08:46.963 "data_size": 0 00:08:46.963 }, 00:08:46.963 { 00:08:46.963 "name": "BaseBdev3", 00:08:46.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.963 "is_configured": false, 00:08:46.963 "data_offset": 0, 00:08:46.963 "data_size": 0 00:08:46.963 } 00:08:46.963 ] 00:08:46.963 }' 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.963 16:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.533 [2024-11-28 16:21:39.038930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.533 BaseBdev2 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:47.533 16:21:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.533 [ 00:08:47.533 { 00:08:47.533 "name": "BaseBdev2", 00:08:47.533 "aliases": [ 00:08:47.533 "5440b722-2261-4be8-a05d-2bb32302a61a" 00:08:47.533 ], 00:08:47.533 "product_name": "Malloc disk", 00:08:47.533 "block_size": 512, 00:08:47.533 "num_blocks": 65536, 00:08:47.533 "uuid": "5440b722-2261-4be8-a05d-2bb32302a61a", 00:08:47.533 "assigned_rate_limits": { 00:08:47.533 "rw_ios_per_sec": 0, 00:08:47.533 "rw_mbytes_per_sec": 0, 00:08:47.533 "r_mbytes_per_sec": 0, 00:08:47.533 "w_mbytes_per_sec": 0 00:08:47.533 }, 00:08:47.533 "claimed": true, 00:08:47.533 "claim_type": "exclusive_write", 00:08:47.533 "zoned": false, 00:08:47.533 "supported_io_types": { 00:08:47.533 "read": true, 00:08:47.533 "write": true, 00:08:47.533 "unmap": true, 00:08:47.533 "flush": true, 00:08:47.533 "reset": true, 00:08:47.533 "nvme_admin": false, 00:08:47.533 "nvme_io": false, 00:08:47.533 "nvme_io_md": false, 00:08:47.533 "write_zeroes": true, 00:08:47.533 "zcopy": true, 00:08:47.533 "get_zone_info": false, 00:08:47.533 "zone_management": false, 00:08:47.533 "zone_append": false, 00:08:47.533 "compare": false, 00:08:47.533 "compare_and_write": false, 00:08:47.533 "abort": true, 00:08:47.533 "seek_hole": false, 00:08:47.533 "seek_data": false, 00:08:47.533 "copy": true, 00:08:47.533 "nvme_iov_md": false 00:08:47.533 }, 00:08:47.533 
"memory_domains": [ 00:08:47.533 { 00:08:47.533 "dma_device_id": "system", 00:08:47.533 "dma_device_type": 1 00:08:47.533 }, 00:08:47.533 { 00:08:47.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.533 "dma_device_type": 2 00:08:47.533 } 00:08:47.533 ], 00:08:47.533 "driver_specific": {} 00:08:47.533 } 00:08:47.533 ] 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.533 "name": "Existed_Raid", 00:08:47.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.533 "strip_size_kb": 0, 00:08:47.533 "state": "configuring", 00:08:47.533 "raid_level": "raid1", 00:08:47.533 "superblock": false, 00:08:47.533 "num_base_bdevs": 3, 00:08:47.533 "num_base_bdevs_discovered": 2, 00:08:47.533 "num_base_bdevs_operational": 3, 00:08:47.533 "base_bdevs_list": [ 00:08:47.533 { 00:08:47.533 "name": "BaseBdev1", 00:08:47.533 "uuid": "091dc25b-e152-4cfd-80fa-48a4b2546a34", 00:08:47.533 "is_configured": true, 00:08:47.533 "data_offset": 0, 00:08:47.533 "data_size": 65536 00:08:47.533 }, 00:08:47.533 { 00:08:47.533 "name": "BaseBdev2", 00:08:47.533 "uuid": "5440b722-2261-4be8-a05d-2bb32302a61a", 00:08:47.533 "is_configured": true, 00:08:47.533 "data_offset": 0, 00:08:47.533 "data_size": 65536 00:08:47.533 }, 00:08:47.533 { 00:08:47.533 "name": "BaseBdev3", 00:08:47.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.533 "is_configured": false, 00:08:47.533 "data_offset": 0, 00:08:47.533 "data_size": 0 00:08:47.533 } 00:08:47.533 ] 00:08:47.533 }' 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.533 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 BaseBdev3 00:08:47.794 [2024-11-28 16:21:39.505122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.794 [2024-11-28 16:21:39.505177] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:47.794 [2024-11-28 16:21:39.505191] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:47.794 [2024-11-28 16:21:39.505488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:47.794 [2024-11-28 16:21:39.505623] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:47.794 [2024-11-28 16:21:39.505639] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:47.794 [2024-11-28 16:21:39.505867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 [ 00:08:47.794 { 00:08:47.794 "name": "BaseBdev3", 00:08:47.794 "aliases": [ 00:08:47.794 "49216bdc-e6f7-4fff-b48e-cf13fe6acaf0" 00:08:47.794 ], 00:08:47.794 "product_name": "Malloc disk", 00:08:47.794 "block_size": 512, 00:08:47.794 "num_blocks": 65536, 00:08:47.794 "uuid": "49216bdc-e6f7-4fff-b48e-cf13fe6acaf0", 00:08:47.794 "assigned_rate_limits": { 00:08:47.794 "rw_ios_per_sec": 0, 00:08:47.794 "rw_mbytes_per_sec": 0, 00:08:47.794 "r_mbytes_per_sec": 0, 00:08:47.794 "w_mbytes_per_sec": 0 00:08:47.794 }, 00:08:47.794 "claimed": true, 00:08:47.794 "claim_type": "exclusive_write", 00:08:47.794 "zoned": false, 00:08:47.794 "supported_io_types": { 00:08:47.794 "read": true, 00:08:47.794 "write": true, 00:08:47.794 "unmap": true, 00:08:47.794 "flush": true, 00:08:47.794 "reset": true, 00:08:47.794 "nvme_admin": false, 00:08:47.794 "nvme_io": false, 00:08:47.794 "nvme_io_md": false, 00:08:47.794 "write_zeroes": true, 00:08:47.794 "zcopy": true, 00:08:47.794 "get_zone_info": false, 00:08:47.794 "zone_management": false, 00:08:47.794 "zone_append": false, 00:08:47.794 "compare": false, 00:08:47.794 "compare_and_write": false, 00:08:47.794 "abort": true, 00:08:47.794 "seek_hole": false, 00:08:47.794 "seek_data": false, 00:08:47.794 
"copy": true, 00:08:47.794 "nvme_iov_md": false 00:08:47.794 }, 00:08:47.794 "memory_domains": [ 00:08:47.794 { 00:08:47.794 "dma_device_id": "system", 00:08:47.794 "dma_device_type": 1 00:08:47.794 }, 00:08:47.794 { 00:08:47.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.794 "dma_device_type": 2 00:08:47.794 } 00:08:47.794 ], 00:08:47.794 "driver_specific": {} 00:08:47.794 } 00:08:47.794 ] 00:08:47.794 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.795 16:21:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.795 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.067 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.067 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.067 "name": "Existed_Raid", 00:08:48.067 "uuid": "c47ae401-acca-449a-825b-7f1060f3908d", 00:08:48.067 "strip_size_kb": 0, 00:08:48.067 "state": "online", 00:08:48.067 "raid_level": "raid1", 00:08:48.067 "superblock": false, 00:08:48.067 "num_base_bdevs": 3, 00:08:48.067 "num_base_bdevs_discovered": 3, 00:08:48.067 "num_base_bdevs_operational": 3, 00:08:48.067 "base_bdevs_list": [ 00:08:48.067 { 00:08:48.067 "name": "BaseBdev1", 00:08:48.067 "uuid": "091dc25b-e152-4cfd-80fa-48a4b2546a34", 00:08:48.067 "is_configured": true, 00:08:48.067 "data_offset": 0, 00:08:48.067 "data_size": 65536 00:08:48.067 }, 00:08:48.067 { 00:08:48.067 "name": "BaseBdev2", 00:08:48.067 "uuid": "5440b722-2261-4be8-a05d-2bb32302a61a", 00:08:48.067 "is_configured": true, 00:08:48.067 "data_offset": 0, 00:08:48.067 "data_size": 65536 00:08:48.067 }, 00:08:48.067 { 00:08:48.067 "name": "BaseBdev3", 00:08:48.067 "uuid": "49216bdc-e6f7-4fff-b48e-cf13fe6acaf0", 00:08:48.067 "is_configured": true, 00:08:48.067 "data_offset": 0, 00:08:48.067 "data_size": 65536 00:08:48.067 } 00:08:48.067 ] 00:08:48.067 }' 00:08:48.067 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.067 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.340 16:21:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.340 [2024-11-28 16:21:39.948677] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.340 "name": "Existed_Raid", 00:08:48.340 "aliases": [ 00:08:48.340 "c47ae401-acca-449a-825b-7f1060f3908d" 00:08:48.340 ], 00:08:48.340 "product_name": "Raid Volume", 00:08:48.340 "block_size": 512, 00:08:48.340 "num_blocks": 65536, 00:08:48.340 "uuid": "c47ae401-acca-449a-825b-7f1060f3908d", 00:08:48.340 "assigned_rate_limits": { 00:08:48.340 "rw_ios_per_sec": 0, 00:08:48.340 "rw_mbytes_per_sec": 0, 00:08:48.340 "r_mbytes_per_sec": 0, 00:08:48.340 "w_mbytes_per_sec": 0 00:08:48.340 }, 00:08:48.340 "claimed": false, 00:08:48.340 "zoned": false, 
00:08:48.340 "supported_io_types": { 00:08:48.340 "read": true, 00:08:48.340 "write": true, 00:08:48.340 "unmap": false, 00:08:48.340 "flush": false, 00:08:48.340 "reset": true, 00:08:48.340 "nvme_admin": false, 00:08:48.340 "nvme_io": false, 00:08:48.340 "nvme_io_md": false, 00:08:48.340 "write_zeroes": true, 00:08:48.340 "zcopy": false, 00:08:48.340 "get_zone_info": false, 00:08:48.340 "zone_management": false, 00:08:48.340 "zone_append": false, 00:08:48.340 "compare": false, 00:08:48.340 "compare_and_write": false, 00:08:48.340 "abort": false, 00:08:48.340 "seek_hole": false, 00:08:48.340 "seek_data": false, 00:08:48.340 "copy": false, 00:08:48.340 "nvme_iov_md": false 00:08:48.340 }, 00:08:48.340 "memory_domains": [ 00:08:48.340 { 00:08:48.340 "dma_device_id": "system", 00:08:48.340 "dma_device_type": 1 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.340 "dma_device_type": 2 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "dma_device_id": "system", 00:08:48.340 "dma_device_type": 1 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.340 "dma_device_type": 2 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "dma_device_id": "system", 00:08:48.340 "dma_device_type": 1 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.340 "dma_device_type": 2 00:08:48.340 } 00:08:48.340 ], 00:08:48.340 "driver_specific": { 00:08:48.340 "raid": { 00:08:48.340 "uuid": "c47ae401-acca-449a-825b-7f1060f3908d", 00:08:48.340 "strip_size_kb": 0, 00:08:48.340 "state": "online", 00:08:48.340 "raid_level": "raid1", 00:08:48.340 "superblock": false, 00:08:48.340 "num_base_bdevs": 3, 00:08:48.340 "num_base_bdevs_discovered": 3, 00:08:48.340 "num_base_bdevs_operational": 3, 00:08:48.340 "base_bdevs_list": [ 00:08:48.340 { 00:08:48.340 "name": "BaseBdev1", 00:08:48.340 "uuid": "091dc25b-e152-4cfd-80fa-48a4b2546a34", 00:08:48.340 "is_configured": true, 00:08:48.340 
"data_offset": 0, 00:08:48.340 "data_size": 65536 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "name": "BaseBdev2", 00:08:48.340 "uuid": "5440b722-2261-4be8-a05d-2bb32302a61a", 00:08:48.340 "is_configured": true, 00:08:48.340 "data_offset": 0, 00:08:48.340 "data_size": 65536 00:08:48.340 }, 00:08:48.340 { 00:08:48.340 "name": "BaseBdev3", 00:08:48.340 "uuid": "49216bdc-e6f7-4fff-b48e-cf13fe6acaf0", 00:08:48.340 "is_configured": true, 00:08:48.340 "data_offset": 0, 00:08:48.340 "data_size": 65536 00:08:48.340 } 00:08:48.340 ] 00:08:48.340 } 00:08:48.340 } 00:08:48.340 }' 00:08:48.340 16:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:48.340 BaseBdev2 00:08:48.340 BaseBdev3' 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.340 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.600 [2024-11-28 16:21:40.224012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.600 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.600 "name": "Existed_Raid", 00:08:48.600 "uuid": "c47ae401-acca-449a-825b-7f1060f3908d", 00:08:48.600 "strip_size_kb": 0, 00:08:48.600 "state": "online", 00:08:48.600 "raid_level": "raid1", 00:08:48.600 "superblock": false, 00:08:48.600 "num_base_bdevs": 3, 00:08:48.600 "num_base_bdevs_discovered": 2, 00:08:48.600 "num_base_bdevs_operational": 2, 00:08:48.600 "base_bdevs_list": [ 00:08:48.600 { 00:08:48.600 "name": null, 00:08:48.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.600 "is_configured": false, 00:08:48.600 "data_offset": 0, 00:08:48.600 "data_size": 65536 00:08:48.600 }, 00:08:48.600 { 00:08:48.600 "name": "BaseBdev2", 00:08:48.600 "uuid": "5440b722-2261-4be8-a05d-2bb32302a61a", 00:08:48.601 "is_configured": true, 00:08:48.601 "data_offset": 0, 00:08:48.601 "data_size": 65536 00:08:48.601 }, 00:08:48.601 { 00:08:48.601 "name": "BaseBdev3", 00:08:48.601 "uuid": "49216bdc-e6f7-4fff-b48e-cf13fe6acaf0", 00:08:48.601 "is_configured": true, 00:08:48.601 "data_offset": 0, 00:08:48.601 "data_size": 65536 00:08:48.601 } 00:08:48.601 ] 
00:08:48.601 }' 00:08:48.601 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.601 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.171 [2024-11-28 16:21:40.726339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.171 16:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.171 [2024-11-28 16:21:40.777203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:49.171 [2024-11-28 16:21:40.777293] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.171 [2024-11-28 16:21:40.788751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.171 [2024-11-28 16:21:40.788805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.171 [2024-11-28 16:21:40.788819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.171 16:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.171 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 BaseBdev2 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.172 
16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 [ 00:08:49.172 { 00:08:49.172 "name": "BaseBdev2", 00:08:49.172 "aliases": [ 00:08:49.172 "6478d95a-a78a-4327-98b3-1ea33ef0fb6d" 00:08:49.172 ], 00:08:49.172 "product_name": "Malloc disk", 00:08:49.172 "block_size": 512, 00:08:49.172 "num_blocks": 65536, 00:08:49.172 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:49.172 "assigned_rate_limits": { 00:08:49.172 "rw_ios_per_sec": 0, 00:08:49.172 "rw_mbytes_per_sec": 0, 00:08:49.172 "r_mbytes_per_sec": 0, 00:08:49.172 "w_mbytes_per_sec": 0 00:08:49.172 }, 00:08:49.172 "claimed": false, 00:08:49.172 "zoned": false, 00:08:49.172 "supported_io_types": { 00:08:49.172 "read": true, 00:08:49.172 "write": true, 00:08:49.172 "unmap": true, 00:08:49.172 "flush": true, 00:08:49.172 "reset": true, 00:08:49.172 "nvme_admin": false, 00:08:49.172 "nvme_io": false, 00:08:49.172 "nvme_io_md": false, 00:08:49.172 "write_zeroes": true, 
00:08:49.172 "zcopy": true, 00:08:49.172 "get_zone_info": false, 00:08:49.172 "zone_management": false, 00:08:49.172 "zone_append": false, 00:08:49.172 "compare": false, 00:08:49.172 "compare_and_write": false, 00:08:49.172 "abort": true, 00:08:49.172 "seek_hole": false, 00:08:49.172 "seek_data": false, 00:08:49.172 "copy": true, 00:08:49.172 "nvme_iov_md": false 00:08:49.172 }, 00:08:49.172 "memory_domains": [ 00:08:49.172 { 00:08:49.172 "dma_device_id": "system", 00:08:49.172 "dma_device_type": 1 00:08:49.172 }, 00:08:49.172 { 00:08:49.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.172 "dma_device_type": 2 00:08:49.172 } 00:08:49.172 ], 00:08:49.172 "driver_specific": {} 00:08:49.172 } 00:08:49.172 ] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 BaseBdev3 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.172 16:21:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.172 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.172 [ 00:08:49.172 { 00:08:49.172 "name": "BaseBdev3", 00:08:49.172 "aliases": [ 00:08:49.172 "5ca1d33b-b951-4d19-8f77-c72b5db5aa48" 00:08:49.172 ], 00:08:49.172 "product_name": "Malloc disk", 00:08:49.172 "block_size": 512, 00:08:49.172 "num_blocks": 65536, 00:08:49.172 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:49.172 "assigned_rate_limits": { 00:08:49.172 "rw_ios_per_sec": 0, 00:08:49.172 "rw_mbytes_per_sec": 0, 00:08:49.172 "r_mbytes_per_sec": 0, 00:08:49.172 "w_mbytes_per_sec": 0 00:08:49.172 }, 00:08:49.172 "claimed": false, 00:08:49.172 "zoned": false, 00:08:49.172 "supported_io_types": { 00:08:49.172 "read": true, 00:08:49.172 "write": true, 00:08:49.172 "unmap": true, 00:08:49.172 "flush": true, 00:08:49.172 "reset": true, 00:08:49.172 "nvme_admin": false, 00:08:49.172 "nvme_io": false, 00:08:49.172 "nvme_io_md": false, 00:08:49.172 "write_zeroes": true, 
00:08:49.172 "zcopy": true, 00:08:49.172 "get_zone_info": false, 00:08:49.172 "zone_management": false, 00:08:49.172 "zone_append": false, 00:08:49.172 "compare": false, 00:08:49.433 "compare_and_write": false, 00:08:49.433 "abort": true, 00:08:49.433 "seek_hole": false, 00:08:49.433 "seek_data": false, 00:08:49.433 "copy": true, 00:08:49.433 "nvme_iov_md": false 00:08:49.433 }, 00:08:49.433 "memory_domains": [ 00:08:49.433 { 00:08:49.433 "dma_device_id": "system", 00:08:49.433 "dma_device_type": 1 00:08:49.433 }, 00:08:49.433 { 00:08:49.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.433 "dma_device_type": 2 00:08:49.433 } 00:08:49.433 ], 00:08:49.433 "driver_specific": {} 00:08:49.433 } 00:08:49.433 ] 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.433 [2024-11-28 16:21:40.951842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.433 [2024-11-28 16:21:40.951962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.433 [2024-11-28 16:21:40.952016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.433 [2024-11-28 16:21:40.953776] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.433 16:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.433 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:49.433 "name": "Existed_Raid", 00:08:49.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.433 "strip_size_kb": 0, 00:08:49.433 "state": "configuring", 00:08:49.433 "raid_level": "raid1", 00:08:49.433 "superblock": false, 00:08:49.433 "num_base_bdevs": 3, 00:08:49.433 "num_base_bdevs_discovered": 2, 00:08:49.433 "num_base_bdevs_operational": 3, 00:08:49.433 "base_bdevs_list": [ 00:08:49.433 { 00:08:49.433 "name": "BaseBdev1", 00:08:49.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.433 "is_configured": false, 00:08:49.433 "data_offset": 0, 00:08:49.433 "data_size": 0 00:08:49.433 }, 00:08:49.433 { 00:08:49.433 "name": "BaseBdev2", 00:08:49.433 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:49.433 "is_configured": true, 00:08:49.433 "data_offset": 0, 00:08:49.433 "data_size": 65536 00:08:49.433 }, 00:08:49.433 { 00:08:49.433 "name": "BaseBdev3", 00:08:49.433 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:49.433 "is_configured": true, 00:08:49.433 "data_offset": 0, 00:08:49.433 "data_size": 65536 00:08:49.433 } 00:08:49.433 ] 00:08:49.433 }' 00:08:49.433 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.433 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.695 [2024-11-28 16:21:41.395145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.695 "name": "Existed_Raid", 00:08:49.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.695 "strip_size_kb": 0, 00:08:49.695 "state": "configuring", 00:08:49.695 "raid_level": "raid1", 00:08:49.695 "superblock": false, 00:08:49.695 "num_base_bdevs": 3, 
00:08:49.695 "num_base_bdevs_discovered": 1, 00:08:49.695 "num_base_bdevs_operational": 3, 00:08:49.695 "base_bdevs_list": [ 00:08:49.695 { 00:08:49.695 "name": "BaseBdev1", 00:08:49.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.695 "is_configured": false, 00:08:49.695 "data_offset": 0, 00:08:49.695 "data_size": 0 00:08:49.695 }, 00:08:49.695 { 00:08:49.695 "name": null, 00:08:49.695 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:49.695 "is_configured": false, 00:08:49.695 "data_offset": 0, 00:08:49.695 "data_size": 65536 00:08:49.695 }, 00:08:49.695 { 00:08:49.695 "name": "BaseBdev3", 00:08:49.695 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:49.695 "is_configured": true, 00:08:49.695 "data_offset": 0, 00:08:49.695 "data_size": 65536 00:08:49.695 } 00:08:49.695 ] 00:08:49.695 }' 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.695 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.265 16:21:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 BaseBdev1 00:08:50.265 [2024-11-28 16:21:41.857262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.265 [ 00:08:50.265 { 00:08:50.265 "name": "BaseBdev1", 00:08:50.265 "aliases": [ 00:08:50.265 "ce55a7a4-a31e-440c-8159-0e91f03d271a" 00:08:50.265 ], 00:08:50.265 "product_name": "Malloc disk", 
00:08:50.265 "block_size": 512, 00:08:50.265 "num_blocks": 65536, 00:08:50.265 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:50.265 "assigned_rate_limits": { 00:08:50.265 "rw_ios_per_sec": 0, 00:08:50.265 "rw_mbytes_per_sec": 0, 00:08:50.265 "r_mbytes_per_sec": 0, 00:08:50.265 "w_mbytes_per_sec": 0 00:08:50.265 }, 00:08:50.265 "claimed": true, 00:08:50.265 "claim_type": "exclusive_write", 00:08:50.265 "zoned": false, 00:08:50.265 "supported_io_types": { 00:08:50.265 "read": true, 00:08:50.265 "write": true, 00:08:50.265 "unmap": true, 00:08:50.265 "flush": true, 00:08:50.265 "reset": true, 00:08:50.265 "nvme_admin": false, 00:08:50.265 "nvme_io": false, 00:08:50.265 "nvme_io_md": false, 00:08:50.265 "write_zeroes": true, 00:08:50.265 "zcopy": true, 00:08:50.265 "get_zone_info": false, 00:08:50.265 "zone_management": false, 00:08:50.265 "zone_append": false, 00:08:50.265 "compare": false, 00:08:50.265 "compare_and_write": false, 00:08:50.265 "abort": true, 00:08:50.265 "seek_hole": false, 00:08:50.265 "seek_data": false, 00:08:50.265 "copy": true, 00:08:50.265 "nvme_iov_md": false 00:08:50.265 }, 00:08:50.265 "memory_domains": [ 00:08:50.265 { 00:08:50.265 "dma_device_id": "system", 00:08:50.265 "dma_device_type": 1 00:08:50.265 }, 00:08:50.265 { 00:08:50.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.265 "dma_device_type": 2 00:08:50.265 } 00:08:50.265 ], 00:08:50.265 "driver_specific": {} 00:08:50.265 } 00:08:50.265 ] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:50.265 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.266 "name": "Existed_Raid", 00:08:50.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.266 "strip_size_kb": 0, 00:08:50.266 "state": "configuring", 00:08:50.266 "raid_level": "raid1", 00:08:50.266 "superblock": false, 00:08:50.266 "num_base_bdevs": 3, 00:08:50.266 "num_base_bdevs_discovered": 2, 00:08:50.266 "num_base_bdevs_operational": 3, 00:08:50.266 "base_bdevs_list": [ 00:08:50.266 { 00:08:50.266 "name": "BaseBdev1", 00:08:50.266 "uuid": 
"ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:50.266 "is_configured": true, 00:08:50.266 "data_offset": 0, 00:08:50.266 "data_size": 65536 00:08:50.266 }, 00:08:50.266 { 00:08:50.266 "name": null, 00:08:50.266 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:50.266 "is_configured": false, 00:08:50.266 "data_offset": 0, 00:08:50.266 "data_size": 65536 00:08:50.266 }, 00:08:50.266 { 00:08:50.266 "name": "BaseBdev3", 00:08:50.266 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:50.266 "is_configured": true, 00:08:50.266 "data_offset": 0, 00:08:50.266 "data_size": 65536 00:08:50.266 } 00:08:50.266 ] 00:08:50.266 }' 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.266 16:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.836 [2024-11-28 16:21:42.384406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.836 16:21:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.836 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.836 "name": "Existed_Raid", 00:08:50.836 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:50.836 "strip_size_kb": 0, 00:08:50.836 "state": "configuring", 00:08:50.837 "raid_level": "raid1", 00:08:50.837 "superblock": false, 00:08:50.837 "num_base_bdevs": 3, 00:08:50.837 "num_base_bdevs_discovered": 1, 00:08:50.837 "num_base_bdevs_operational": 3, 00:08:50.837 "base_bdevs_list": [ 00:08:50.837 { 00:08:50.837 "name": "BaseBdev1", 00:08:50.837 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:50.837 "is_configured": true, 00:08:50.837 "data_offset": 0, 00:08:50.837 "data_size": 65536 00:08:50.837 }, 00:08:50.837 { 00:08:50.837 "name": null, 00:08:50.837 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:50.837 "is_configured": false, 00:08:50.837 "data_offset": 0, 00:08:50.837 "data_size": 65536 00:08:50.837 }, 00:08:50.837 { 00:08:50.837 "name": null, 00:08:50.837 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:50.837 "is_configured": false, 00:08:50.837 "data_offset": 0, 00:08:50.837 "data_size": 65536 00:08:50.837 } 00:08:50.837 ] 00:08:50.837 }' 00:08:50.837 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.837 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.096 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.356 [2024-11-28 16:21:42.867716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.356 "name": "Existed_Raid", 00:08:51.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.356 "strip_size_kb": 0, 00:08:51.356 "state": "configuring", 00:08:51.356 "raid_level": "raid1", 00:08:51.356 "superblock": false, 00:08:51.356 "num_base_bdevs": 3, 00:08:51.356 "num_base_bdevs_discovered": 2, 00:08:51.356 "num_base_bdevs_operational": 3, 00:08:51.356 "base_bdevs_list": [ 00:08:51.356 { 00:08:51.356 "name": "BaseBdev1", 00:08:51.356 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:51.356 "is_configured": true, 00:08:51.356 "data_offset": 0, 00:08:51.356 "data_size": 65536 00:08:51.356 }, 00:08:51.356 { 00:08:51.356 "name": null, 00:08:51.356 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:51.356 "is_configured": false, 00:08:51.356 "data_offset": 0, 00:08:51.356 "data_size": 65536 00:08:51.356 }, 00:08:51.356 { 00:08:51.356 "name": "BaseBdev3", 00:08:51.356 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:51.356 "is_configured": true, 00:08:51.356 "data_offset": 0, 00:08:51.356 "data_size": 65536 00:08:51.356 } 00:08:51.356 ] 00:08:51.356 }' 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.356 16:21:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.616 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.616 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.616 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.617 16:21:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.617 [2024-11-28 16:21:43.358895] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.617 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.876 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.876 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.876 "name": "Existed_Raid", 00:08:51.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.876 "strip_size_kb": 0, 00:08:51.876 "state": "configuring", 00:08:51.876 "raid_level": "raid1", 00:08:51.876 "superblock": false, 00:08:51.876 "num_base_bdevs": 3, 00:08:51.876 "num_base_bdevs_discovered": 1, 00:08:51.876 "num_base_bdevs_operational": 3, 00:08:51.876 "base_bdevs_list": [ 00:08:51.876 { 00:08:51.876 "name": null, 00:08:51.876 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:51.876 "is_configured": false, 00:08:51.876 "data_offset": 0, 00:08:51.876 "data_size": 65536 00:08:51.876 }, 00:08:51.876 { 00:08:51.876 "name": null, 00:08:51.876 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:51.876 "is_configured": false, 00:08:51.876 "data_offset": 0, 00:08:51.876 "data_size": 65536 00:08:51.876 }, 00:08:51.876 { 00:08:51.876 "name": "BaseBdev3", 00:08:51.876 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:51.876 "is_configured": true, 00:08:51.876 "data_offset": 0, 00:08:51.876 "data_size": 65536 00:08:51.876 } 00:08:51.876 ] 00:08:51.876 }' 00:08:51.876 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.876 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.135 [2024-11-28 16:21:43.872459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.135 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.394 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.394 "name": "Existed_Raid", 00:08:52.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.394 "strip_size_kb": 0, 00:08:52.394 "state": "configuring", 00:08:52.394 "raid_level": "raid1", 00:08:52.394 "superblock": false, 00:08:52.394 "num_base_bdevs": 3, 00:08:52.394 "num_base_bdevs_discovered": 2, 00:08:52.394 "num_base_bdevs_operational": 3, 00:08:52.394 "base_bdevs_list": [ 00:08:52.394 { 00:08:52.394 "name": null, 00:08:52.394 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:52.394 "is_configured": false, 00:08:52.394 "data_offset": 0, 00:08:52.394 "data_size": 65536 00:08:52.394 }, 00:08:52.394 { 00:08:52.394 "name": "BaseBdev2", 00:08:52.394 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:52.394 "is_configured": true, 00:08:52.394 "data_offset": 0, 00:08:52.394 "data_size": 65536 00:08:52.394 }, 00:08:52.394 { 00:08:52.394 "name": "BaseBdev3", 
00:08:52.394 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:52.394 "is_configured": true, 00:08:52.394 "data_offset": 0, 00:08:52.394 "data_size": 65536 00:08:52.394 } 00:08:52.394 ] 00:08:52.394 }' 00:08:52.394 16:21:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.394 16:21:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:52.653 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce55a7a4-a31e-440c-8159-0e91f03d271a 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.654 [2024-11-28 16:21:44.406676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:52.654 [2024-11-28 16:21:44.406787] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:52.654 [2024-11-28 16:21:44.406811] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:52.654 [2024-11-28 16:21:44.407091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:52.654 [2024-11-28 16:21:44.407270] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:52.654 [2024-11-28 16:21:44.407315] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:52.654 [2024-11-28 16:21:44.407524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.654 NewBaseBdev 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.654 
16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.654 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.914 [ 00:08:52.914 { 00:08:52.914 "name": "NewBaseBdev", 00:08:52.914 "aliases": [ 00:08:52.914 "ce55a7a4-a31e-440c-8159-0e91f03d271a" 00:08:52.914 ], 00:08:52.914 "product_name": "Malloc disk", 00:08:52.914 "block_size": 512, 00:08:52.914 "num_blocks": 65536, 00:08:52.914 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:52.914 "assigned_rate_limits": { 00:08:52.914 "rw_ios_per_sec": 0, 00:08:52.914 "rw_mbytes_per_sec": 0, 00:08:52.914 "r_mbytes_per_sec": 0, 00:08:52.914 "w_mbytes_per_sec": 0 00:08:52.914 }, 00:08:52.914 "claimed": true, 00:08:52.914 "claim_type": "exclusive_write", 00:08:52.914 "zoned": false, 00:08:52.914 "supported_io_types": { 00:08:52.914 "read": true, 00:08:52.914 "write": true, 00:08:52.914 "unmap": true, 00:08:52.914 "flush": true, 00:08:52.914 "reset": true, 00:08:52.914 "nvme_admin": false, 00:08:52.914 "nvme_io": false, 00:08:52.914 "nvme_io_md": false, 00:08:52.914 "write_zeroes": true, 00:08:52.914 "zcopy": true, 00:08:52.914 "get_zone_info": false, 00:08:52.914 "zone_management": false, 00:08:52.914 "zone_append": false, 00:08:52.914 "compare": false, 00:08:52.914 "compare_and_write": false, 00:08:52.914 "abort": true, 00:08:52.914 "seek_hole": false, 00:08:52.914 "seek_data": false, 00:08:52.914 "copy": true, 00:08:52.914 "nvme_iov_md": false 00:08:52.914 }, 00:08:52.914 "memory_domains": [ 00:08:52.914 { 00:08:52.914 "dma_device_id": "system", 00:08:52.914 "dma_device_type": 1 
00:08:52.914 }, 00:08:52.914 { 00:08:52.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.914 "dma_device_type": 2 00:08:52.914 } 00:08:52.914 ], 00:08:52.914 "driver_specific": {} 00:08:52.914 } 00:08:52.914 ] 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.914 "name": "Existed_Raid", 00:08:52.914 "uuid": "3de7bcba-9e6f-412f-b5af-4471545e5263", 00:08:52.914 "strip_size_kb": 0, 00:08:52.914 "state": "online", 00:08:52.914 "raid_level": "raid1", 00:08:52.914 "superblock": false, 00:08:52.914 "num_base_bdevs": 3, 00:08:52.914 "num_base_bdevs_discovered": 3, 00:08:52.914 "num_base_bdevs_operational": 3, 00:08:52.914 "base_bdevs_list": [ 00:08:52.914 { 00:08:52.914 "name": "NewBaseBdev", 00:08:52.914 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:52.914 "is_configured": true, 00:08:52.914 "data_offset": 0, 00:08:52.914 "data_size": 65536 00:08:52.914 }, 00:08:52.914 { 00:08:52.914 "name": "BaseBdev2", 00:08:52.914 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:52.914 "is_configured": true, 00:08:52.914 "data_offset": 0, 00:08:52.914 "data_size": 65536 00:08:52.914 }, 00:08:52.914 { 00:08:52.914 "name": "BaseBdev3", 00:08:52.914 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:52.914 "is_configured": true, 00:08:52.914 "data_offset": 0, 00:08:52.914 "data_size": 65536 00:08:52.914 } 00:08:52.914 ] 00:08:52.914 }' 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.914 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.174 [2024-11-28 16:21:44.890194] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.174 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.174 "name": "Existed_Raid", 00:08:53.174 "aliases": [ 00:08:53.174 "3de7bcba-9e6f-412f-b5af-4471545e5263" 00:08:53.174 ], 00:08:53.174 "product_name": "Raid Volume", 00:08:53.174 "block_size": 512, 00:08:53.174 "num_blocks": 65536, 00:08:53.174 "uuid": "3de7bcba-9e6f-412f-b5af-4471545e5263", 00:08:53.174 "assigned_rate_limits": { 00:08:53.174 "rw_ios_per_sec": 0, 00:08:53.174 "rw_mbytes_per_sec": 0, 00:08:53.174 "r_mbytes_per_sec": 0, 00:08:53.174 "w_mbytes_per_sec": 0 00:08:53.174 }, 00:08:53.174 "claimed": false, 00:08:53.174 "zoned": false, 00:08:53.174 "supported_io_types": { 00:08:53.174 "read": true, 00:08:53.174 "write": true, 00:08:53.174 "unmap": false, 00:08:53.174 "flush": false, 00:08:53.174 "reset": true, 00:08:53.174 "nvme_admin": false, 00:08:53.174 "nvme_io": false, 00:08:53.174 "nvme_io_md": false, 00:08:53.174 "write_zeroes": true, 00:08:53.174 "zcopy": false, 00:08:53.174 "get_zone_info": false, 00:08:53.174 "zone_management": false, 00:08:53.174 
"zone_append": false, 00:08:53.174 "compare": false, 00:08:53.174 "compare_and_write": false, 00:08:53.174 "abort": false, 00:08:53.174 "seek_hole": false, 00:08:53.174 "seek_data": false, 00:08:53.174 "copy": false, 00:08:53.174 "nvme_iov_md": false 00:08:53.174 }, 00:08:53.174 "memory_domains": [ 00:08:53.175 { 00:08:53.175 "dma_device_id": "system", 00:08:53.175 "dma_device_type": 1 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.175 "dma_device_type": 2 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "dma_device_id": "system", 00:08:53.175 "dma_device_type": 1 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.175 "dma_device_type": 2 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "dma_device_id": "system", 00:08:53.175 "dma_device_type": 1 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.175 "dma_device_type": 2 00:08:53.175 } 00:08:53.175 ], 00:08:53.175 "driver_specific": { 00:08:53.175 "raid": { 00:08:53.175 "uuid": "3de7bcba-9e6f-412f-b5af-4471545e5263", 00:08:53.175 "strip_size_kb": 0, 00:08:53.175 "state": "online", 00:08:53.175 "raid_level": "raid1", 00:08:53.175 "superblock": false, 00:08:53.175 "num_base_bdevs": 3, 00:08:53.175 "num_base_bdevs_discovered": 3, 00:08:53.175 "num_base_bdevs_operational": 3, 00:08:53.175 "base_bdevs_list": [ 00:08:53.175 { 00:08:53.175 "name": "NewBaseBdev", 00:08:53.175 "uuid": "ce55a7a4-a31e-440c-8159-0e91f03d271a", 00:08:53.175 "is_configured": true, 00:08:53.175 "data_offset": 0, 00:08:53.175 "data_size": 65536 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "name": "BaseBdev2", 00:08:53.175 "uuid": "6478d95a-a78a-4327-98b3-1ea33ef0fb6d", 00:08:53.175 "is_configured": true, 00:08:53.175 "data_offset": 0, 00:08:53.175 "data_size": 65536 00:08:53.175 }, 00:08:53.175 { 00:08:53.175 "name": "BaseBdev3", 00:08:53.175 "uuid": "5ca1d33b-b951-4d19-8f77-c72b5db5aa48", 00:08:53.175 "is_configured": true, 
00:08:53.175 "data_offset": 0, 00:08:53.175 "data_size": 65536 00:08:53.175 } 00:08:53.175 ] 00:08:53.175 } 00:08:53.175 } 00:08:53.175 }' 00:08:53.175 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.444 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:53.444 BaseBdev2 00:08:53.444 BaseBdev3' 00:08:53.444 16:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.444 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.445 16:21:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.445 [2024-11-28 16:21:45.177385] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:08:53.445 [2024-11-28 16:21:45.177416] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.445 [2024-11-28 16:21:45.177481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.445 [2024-11-28 16:21:45.177725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.445 [2024-11-28 16:21:45.177735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78471 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78471 ']' 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78471 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.445 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78471 00:08:53.711 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.711 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.711 killing process with pid 78471 00:08:53.711 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78471' 00:08:53.711 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78471 00:08:53.711 [2024-11-28 16:21:45.218648] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:53.711 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78471 00:08:53.711 [2024-11-28 16:21:45.248997] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.971 00:08:53.971 real 0m8.823s 00:08:53.971 user 0m15.000s 00:08:53.971 sys 0m1.776s 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.971 ************************************ 00:08:53.971 END TEST raid_state_function_test 00:08:53.971 ************************************ 00:08:53.971 16:21:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:53.971 16:21:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.971 16:21:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.971 16:21:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.971 ************************************ 00:08:53.971 START TEST raid_state_function_test_sb 00:08:53.971 ************************************ 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79080 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.971 Process raid pid: 79080 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79080' 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79080 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79080 ']' 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.971 16:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.971 [2024-11-28 16:21:45.657995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:53.971 [2024-11-28 16:21:45.658143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.231 [2024-11-28 16:21:45.811166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.231 [2024-11-28 16:21:45.855567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.231 [2024-11-28 16:21:45.898478] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.231 [2024-11-28 16:21:45.898517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.803 [2024-11-28 16:21:46.484008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.803 [2024-11-28 16:21:46.484067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.803 [2024-11-28 16:21:46.484079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.803 [2024-11-28 16:21:46.484088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.803 [2024-11-28 16:21:46.484094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:54.803 [2024-11-28 16:21:46.484105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.803 "name": "Existed_Raid", 00:08:54.803 "uuid": "55bde364-d91c-4dbe-bb4f-7e66d30ecabf", 00:08:54.803 "strip_size_kb": 0, 00:08:54.803 "state": "configuring", 00:08:54.803 "raid_level": "raid1", 00:08:54.803 "superblock": true, 00:08:54.803 "num_base_bdevs": 3, 00:08:54.803 "num_base_bdevs_discovered": 0, 00:08:54.803 "num_base_bdevs_operational": 3, 00:08:54.803 "base_bdevs_list": [ 00:08:54.803 { 00:08:54.803 "name": "BaseBdev1", 00:08:54.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.803 "is_configured": false, 00:08:54.803 "data_offset": 0, 00:08:54.803 "data_size": 0 00:08:54.803 }, 00:08:54.803 { 00:08:54.803 "name": "BaseBdev2", 00:08:54.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.803 "is_configured": false, 00:08:54.803 "data_offset": 0, 00:08:54.803 "data_size": 0 00:08:54.803 }, 00:08:54.803 { 00:08:54.803 "name": "BaseBdev3", 00:08:54.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.803 "is_configured": false, 00:08:54.803 "data_offset": 0, 00:08:54.803 "data_size": 0 00:08:54.803 } 00:08:54.803 ] 00:08:54.803 }' 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.803 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.374 [2024-11-28 16:21:46.963182] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.374 [2024-11-28 16:21:46.963283] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.374 [2024-11-28 16:21:46.971193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.374 [2024-11-28 16:21:46.971277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.374 [2024-11-28 16:21:46.971322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.374 [2024-11-28 16:21:46.971344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.374 [2024-11-28 16:21:46.971395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.374 [2024-11-28 16:21:46.971416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.374 [2024-11-28 16:21:46.987827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.374 BaseBdev1 
00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.374 16:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.374 [ 00:08:55.374 { 00:08:55.374 "name": "BaseBdev1", 00:08:55.374 "aliases": [ 00:08:55.374 "74957663-bc1c-4991-baab-3009ec45e689" 00:08:55.374 ], 00:08:55.374 "product_name": "Malloc disk", 00:08:55.374 "block_size": 512, 00:08:55.374 "num_blocks": 65536, 00:08:55.374 "uuid": "74957663-bc1c-4991-baab-3009ec45e689", 00:08:55.374 "assigned_rate_limits": { 00:08:55.374 
"rw_ios_per_sec": 0, 00:08:55.374 "rw_mbytes_per_sec": 0, 00:08:55.374 "r_mbytes_per_sec": 0, 00:08:55.374 "w_mbytes_per_sec": 0 00:08:55.374 }, 00:08:55.374 "claimed": true, 00:08:55.374 "claim_type": "exclusive_write", 00:08:55.374 "zoned": false, 00:08:55.374 "supported_io_types": { 00:08:55.374 "read": true, 00:08:55.374 "write": true, 00:08:55.374 "unmap": true, 00:08:55.374 "flush": true, 00:08:55.374 "reset": true, 00:08:55.374 "nvme_admin": false, 00:08:55.374 "nvme_io": false, 00:08:55.374 "nvme_io_md": false, 00:08:55.374 "write_zeroes": true, 00:08:55.374 "zcopy": true, 00:08:55.374 "get_zone_info": false, 00:08:55.374 "zone_management": false, 00:08:55.374 "zone_append": false, 00:08:55.374 "compare": false, 00:08:55.374 "compare_and_write": false, 00:08:55.374 "abort": true, 00:08:55.374 "seek_hole": false, 00:08:55.374 "seek_data": false, 00:08:55.374 "copy": true, 00:08:55.374 "nvme_iov_md": false 00:08:55.374 }, 00:08:55.374 "memory_domains": [ 00:08:55.374 { 00:08:55.374 "dma_device_id": "system", 00:08:55.374 "dma_device_type": 1 00:08:55.374 }, 00:08:55.374 { 00:08:55.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.374 "dma_device_type": 2 00:08:55.374 } 00:08:55.374 ], 00:08:55.374 "driver_specific": {} 00:08:55.374 } 00:08:55.374 ] 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.374 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.375 "name": "Existed_Raid", 00:08:55.375 "uuid": "1a1aa3db-b067-4c16-81d1-3f2522c95b82", 00:08:55.375 "strip_size_kb": 0, 00:08:55.375 "state": "configuring", 00:08:55.375 "raid_level": "raid1", 00:08:55.375 "superblock": true, 00:08:55.375 "num_base_bdevs": 3, 00:08:55.375 "num_base_bdevs_discovered": 1, 00:08:55.375 "num_base_bdevs_operational": 3, 00:08:55.375 "base_bdevs_list": [ 00:08:55.375 { 00:08:55.375 "name": "BaseBdev1", 00:08:55.375 "uuid": "74957663-bc1c-4991-baab-3009ec45e689", 00:08:55.375 "is_configured": true, 00:08:55.375 "data_offset": 2048, 00:08:55.375 "data_size": 63488 
00:08:55.375 }, 00:08:55.375 { 00:08:55.375 "name": "BaseBdev2", 00:08:55.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.375 "is_configured": false, 00:08:55.375 "data_offset": 0, 00:08:55.375 "data_size": 0 00:08:55.375 }, 00:08:55.375 { 00:08:55.375 "name": "BaseBdev3", 00:08:55.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.375 "is_configured": false, 00:08:55.375 "data_offset": 0, 00:08:55.375 "data_size": 0 00:08:55.375 } 00:08:55.375 ] 00:08:55.375 }' 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.375 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.946 [2024-11-28 16:21:47.463410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.946 [2024-11-28 16:21:47.463516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.946 [2024-11-28 16:21:47.475432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.946 [2024-11-28 16:21:47.477239] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.946 [2024-11-28 16:21:47.477283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.946 [2024-11-28 16:21:47.477292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.946 [2024-11-28 16:21:47.477318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.946 "name": "Existed_Raid", 00:08:55.946 "uuid": "5e7b8980-7a13-4041-9459-f0e71c26053e", 00:08:55.946 "strip_size_kb": 0, 00:08:55.946 "state": "configuring", 00:08:55.946 "raid_level": "raid1", 00:08:55.946 "superblock": true, 00:08:55.946 "num_base_bdevs": 3, 00:08:55.946 "num_base_bdevs_discovered": 1, 00:08:55.946 "num_base_bdevs_operational": 3, 00:08:55.946 "base_bdevs_list": [ 00:08:55.946 { 00:08:55.946 "name": "BaseBdev1", 00:08:55.946 "uuid": "74957663-bc1c-4991-baab-3009ec45e689", 00:08:55.946 "is_configured": true, 00:08:55.946 "data_offset": 2048, 00:08:55.946 "data_size": 63488 00:08:55.946 }, 00:08:55.946 { 00:08:55.946 "name": "BaseBdev2", 00:08:55.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.946 "is_configured": false, 00:08:55.946 "data_offset": 0, 00:08:55.946 "data_size": 0 00:08:55.946 }, 00:08:55.946 { 00:08:55.946 "name": "BaseBdev3", 00:08:55.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.946 "is_configured": false, 00:08:55.946 "data_offset": 0, 00:08:55.946 "data_size": 0 00:08:55.946 } 00:08:55.946 ] 00:08:55.946 }' 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.946 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.205 [2024-11-28 16:21:47.890888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.205 BaseBdev2 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:56.205 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.205 [ 00:08:56.205 { 00:08:56.205 "name": "BaseBdev2", 00:08:56.205 "aliases": [ 00:08:56.205 "e61cac4c-4a85-4d1a-85e0-eb609b83ffbb" 00:08:56.205 ], 00:08:56.205 "product_name": "Malloc disk", 00:08:56.205 "block_size": 512, 00:08:56.205 "num_blocks": 65536, 00:08:56.205 "uuid": "e61cac4c-4a85-4d1a-85e0-eb609b83ffbb", 00:08:56.205 "assigned_rate_limits": { 00:08:56.205 "rw_ios_per_sec": 0, 00:08:56.205 "rw_mbytes_per_sec": 0, 00:08:56.205 "r_mbytes_per_sec": 0, 00:08:56.205 "w_mbytes_per_sec": 0 00:08:56.205 }, 00:08:56.205 "claimed": true, 00:08:56.205 "claim_type": "exclusive_write", 00:08:56.205 "zoned": false, 00:08:56.205 "supported_io_types": { 00:08:56.205 "read": true, 00:08:56.205 "write": true, 00:08:56.205 "unmap": true, 00:08:56.205 "flush": true, 00:08:56.205 "reset": true, 00:08:56.205 "nvme_admin": false, 00:08:56.206 "nvme_io": false, 00:08:56.206 "nvme_io_md": false, 00:08:56.206 "write_zeroes": true, 00:08:56.206 "zcopy": true, 00:08:56.206 "get_zone_info": false, 00:08:56.206 "zone_management": false, 00:08:56.206 "zone_append": false, 00:08:56.206 "compare": false, 00:08:56.206 "compare_and_write": false, 00:08:56.206 "abort": true, 00:08:56.206 "seek_hole": false, 00:08:56.206 "seek_data": false, 00:08:56.206 "copy": true, 00:08:56.206 "nvme_iov_md": false 00:08:56.206 }, 00:08:56.206 "memory_domains": [ 00:08:56.206 { 00:08:56.206 "dma_device_id": "system", 00:08:56.206 "dma_device_type": 1 00:08:56.206 }, 00:08:56.206 { 00:08:56.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.206 "dma_device_type": 2 00:08:56.206 } 00:08:56.206 ], 00:08:56.206 "driver_specific": {} 00:08:56.206 } 00:08:56.206 ] 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.206 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 
16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.465 "name": "Existed_Raid", 00:08:56.465 "uuid": "5e7b8980-7a13-4041-9459-f0e71c26053e", 00:08:56.465 "strip_size_kb": 0, 00:08:56.465 "state": "configuring", 00:08:56.465 "raid_level": "raid1", 00:08:56.465 "superblock": true, 00:08:56.465 "num_base_bdevs": 3, 00:08:56.465 "num_base_bdevs_discovered": 2, 00:08:56.465 "num_base_bdevs_operational": 3, 00:08:56.465 "base_bdevs_list": [ 00:08:56.465 { 00:08:56.465 "name": "BaseBdev1", 00:08:56.465 "uuid": "74957663-bc1c-4991-baab-3009ec45e689", 00:08:56.465 "is_configured": true, 00:08:56.465 "data_offset": 2048, 00:08:56.465 "data_size": 63488 00:08:56.465 }, 00:08:56.465 { 00:08:56.465 "name": "BaseBdev2", 00:08:56.465 "uuid": "e61cac4c-4a85-4d1a-85e0-eb609b83ffbb", 00:08:56.465 "is_configured": true, 00:08:56.465 "data_offset": 2048, 00:08:56.465 "data_size": 63488 00:08:56.465 }, 00:08:56.465 { 00:08:56.465 "name": "BaseBdev3", 00:08:56.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.465 "is_configured": false, 00:08:56.465 "data_offset": 0, 00:08:56.465 "data_size": 0 00:08:56.465 } 00:08:56.465 ] 00:08:56.465 }' 00:08:56.465 16:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.465 16:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.725 [2024-11-28 16:21:48.333060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.725 [2024-11-28 16:21:48.333259] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:08:56.725 [2024-11-28 16:21:48.333278] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:56.725 BaseBdev3 00:08:56.725 [2024-11-28 16:21:48.333564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:56.725 [2024-11-28 16:21:48.333706] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:56.725 [2024-11-28 16:21:48.333717] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:56.725 [2024-11-28 16:21:48.333853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.725 16:21:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.725 [ 00:08:56.725 { 00:08:56.725 "name": "BaseBdev3", 00:08:56.725 "aliases": [ 00:08:56.725 "4988e76d-e993-4d21-9f83-d4f54af1404e" 00:08:56.725 ], 00:08:56.725 "product_name": "Malloc disk", 00:08:56.725 "block_size": 512, 00:08:56.725 "num_blocks": 65536, 00:08:56.725 "uuid": "4988e76d-e993-4d21-9f83-d4f54af1404e", 00:08:56.725 "assigned_rate_limits": { 00:08:56.725 "rw_ios_per_sec": 0, 00:08:56.725 "rw_mbytes_per_sec": 0, 00:08:56.725 "r_mbytes_per_sec": 0, 00:08:56.725 "w_mbytes_per_sec": 0 00:08:56.725 }, 00:08:56.725 "claimed": true, 00:08:56.725 "claim_type": "exclusive_write", 00:08:56.725 "zoned": false, 00:08:56.725 "supported_io_types": { 00:08:56.725 "read": true, 00:08:56.725 "write": true, 00:08:56.725 "unmap": true, 00:08:56.725 "flush": true, 00:08:56.725 "reset": true, 00:08:56.725 "nvme_admin": false, 00:08:56.725 "nvme_io": false, 00:08:56.725 "nvme_io_md": false, 00:08:56.725 "write_zeroes": true, 00:08:56.725 "zcopy": true, 00:08:56.725 "get_zone_info": false, 00:08:56.725 "zone_management": false, 00:08:56.725 "zone_append": false, 00:08:56.725 "compare": false, 00:08:56.725 "compare_and_write": false, 00:08:56.725 "abort": true, 00:08:56.725 "seek_hole": false, 00:08:56.725 "seek_data": false, 00:08:56.725 "copy": true, 00:08:56.725 "nvme_iov_md": false 00:08:56.725 }, 00:08:56.725 "memory_domains": [ 00:08:56.725 { 00:08:56.725 "dma_device_id": "system", 00:08:56.725 "dma_device_type": 1 00:08:56.725 }, 00:08:56.725 { 00:08:56.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.725 "dma_device_type": 2 00:08:56.725 } 00:08:56.725 ], 00:08:56.725 "driver_specific": {} 00:08:56.725 } 00:08:56.725 ] 
00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.725 
16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.725 "name": "Existed_Raid", 00:08:56.725 "uuid": "5e7b8980-7a13-4041-9459-f0e71c26053e", 00:08:56.725 "strip_size_kb": 0, 00:08:56.725 "state": "online", 00:08:56.725 "raid_level": "raid1", 00:08:56.725 "superblock": true, 00:08:56.725 "num_base_bdevs": 3, 00:08:56.725 "num_base_bdevs_discovered": 3, 00:08:56.725 "num_base_bdevs_operational": 3, 00:08:56.725 "base_bdevs_list": [ 00:08:56.725 { 00:08:56.725 "name": "BaseBdev1", 00:08:56.725 "uuid": "74957663-bc1c-4991-baab-3009ec45e689", 00:08:56.725 "is_configured": true, 00:08:56.725 "data_offset": 2048, 00:08:56.725 "data_size": 63488 00:08:56.725 }, 00:08:56.725 { 00:08:56.725 "name": "BaseBdev2", 00:08:56.725 "uuid": "e61cac4c-4a85-4d1a-85e0-eb609b83ffbb", 00:08:56.725 "is_configured": true, 00:08:56.725 "data_offset": 2048, 00:08:56.725 "data_size": 63488 00:08:56.725 }, 00:08:56.725 { 00:08:56.725 "name": "BaseBdev3", 00:08:56.725 "uuid": "4988e76d-e993-4d21-9f83-d4f54af1404e", 00:08:56.725 "is_configured": true, 00:08:56.725 "data_offset": 2048, 00:08:56.725 "data_size": 63488 00:08:56.725 } 00:08:56.725 ] 00:08:56.725 }' 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.725 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.295 [2024-11-28 16:21:48.784608] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.295 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.295 "name": "Existed_Raid", 00:08:57.295 "aliases": [ 00:08:57.295 "5e7b8980-7a13-4041-9459-f0e71c26053e" 00:08:57.295 ], 00:08:57.295 "product_name": "Raid Volume", 00:08:57.295 "block_size": 512, 00:08:57.295 "num_blocks": 63488, 00:08:57.295 "uuid": "5e7b8980-7a13-4041-9459-f0e71c26053e", 00:08:57.295 "assigned_rate_limits": { 00:08:57.295 "rw_ios_per_sec": 0, 00:08:57.295 "rw_mbytes_per_sec": 0, 00:08:57.295 "r_mbytes_per_sec": 0, 00:08:57.295 "w_mbytes_per_sec": 0 00:08:57.295 }, 00:08:57.295 "claimed": false, 00:08:57.295 "zoned": false, 00:08:57.295 "supported_io_types": { 00:08:57.295 "read": true, 00:08:57.295 "write": true, 00:08:57.295 "unmap": false, 00:08:57.295 "flush": false, 00:08:57.295 "reset": true, 00:08:57.295 "nvme_admin": false, 00:08:57.295 "nvme_io": false, 00:08:57.295 "nvme_io_md": false, 00:08:57.295 "write_zeroes": true, 
00:08:57.295 "zcopy": false, 00:08:57.295 "get_zone_info": false, 00:08:57.295 "zone_management": false, 00:08:57.295 "zone_append": false, 00:08:57.295 "compare": false, 00:08:57.295 "compare_and_write": false, 00:08:57.295 "abort": false, 00:08:57.295 "seek_hole": false, 00:08:57.295 "seek_data": false, 00:08:57.295 "copy": false, 00:08:57.295 "nvme_iov_md": false 00:08:57.295 }, 00:08:57.295 "memory_domains": [ 00:08:57.295 { 00:08:57.295 "dma_device_id": "system", 00:08:57.295 "dma_device_type": 1 00:08:57.295 }, 00:08:57.295 { 00:08:57.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.295 "dma_device_type": 2 00:08:57.295 }, 00:08:57.295 { 00:08:57.295 "dma_device_id": "system", 00:08:57.295 "dma_device_type": 1 00:08:57.295 }, 00:08:57.295 { 00:08:57.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.295 "dma_device_type": 2 00:08:57.295 }, 00:08:57.295 { 00:08:57.295 "dma_device_id": "system", 00:08:57.295 "dma_device_type": 1 00:08:57.295 }, 00:08:57.295 { 00:08:57.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.295 "dma_device_type": 2 00:08:57.295 } 00:08:57.295 ], 00:08:57.295 "driver_specific": { 00:08:57.295 "raid": { 00:08:57.295 "uuid": "5e7b8980-7a13-4041-9459-f0e71c26053e", 00:08:57.295 "strip_size_kb": 0, 00:08:57.295 "state": "online", 00:08:57.295 "raid_level": "raid1", 00:08:57.295 "superblock": true, 00:08:57.295 "num_base_bdevs": 3, 00:08:57.295 "num_base_bdevs_discovered": 3, 00:08:57.295 "num_base_bdevs_operational": 3, 00:08:57.295 "base_bdevs_list": [ 00:08:57.295 { 00:08:57.295 "name": "BaseBdev1", 00:08:57.295 "uuid": "74957663-bc1c-4991-baab-3009ec45e689", 00:08:57.295 "is_configured": true, 00:08:57.295 "data_offset": 2048, 00:08:57.295 "data_size": 63488 00:08:57.295 }, 00:08:57.295 { 00:08:57.295 "name": "BaseBdev2", 00:08:57.295 "uuid": "e61cac4c-4a85-4d1a-85e0-eb609b83ffbb", 00:08:57.295 "is_configured": true, 00:08:57.295 "data_offset": 2048, 00:08:57.295 "data_size": 63488 00:08:57.295 }, 00:08:57.295 { 
00:08:57.296 "name": "BaseBdev3", 00:08:57.296 "uuid": "4988e76d-e993-4d21-9f83-d4f54af1404e", 00:08:57.296 "is_configured": true, 00:08:57.296 "data_offset": 2048, 00:08:57.296 "data_size": 63488 00:08:57.296 } 00:08:57.296 ] 00:08:57.296 } 00:08:57.296 } 00:08:57.296 }' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.296 BaseBdev2 00:08:57.296 BaseBdev3' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.296 16:21:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.296 16:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.296 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.556 [2024-11-28 16:21:49.071935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.556 
16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.556 "name": "Existed_Raid", 00:08:57.556 "uuid": "5e7b8980-7a13-4041-9459-f0e71c26053e", 00:08:57.556 "strip_size_kb": 0, 00:08:57.556 "state": "online", 00:08:57.556 "raid_level": "raid1", 00:08:57.556 "superblock": true, 00:08:57.556 "num_base_bdevs": 3, 00:08:57.556 "num_base_bdevs_discovered": 2, 00:08:57.556 "num_base_bdevs_operational": 2, 00:08:57.556 "base_bdevs_list": [ 00:08:57.556 { 00:08:57.556 "name": null, 00:08:57.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.556 "is_configured": false, 00:08:57.556 "data_offset": 0, 00:08:57.556 "data_size": 63488 00:08:57.556 }, 00:08:57.556 { 00:08:57.556 "name": "BaseBdev2", 00:08:57.556 "uuid": "e61cac4c-4a85-4d1a-85e0-eb609b83ffbb", 00:08:57.556 "is_configured": true, 00:08:57.556 "data_offset": 2048, 00:08:57.556 "data_size": 63488 00:08:57.556 }, 00:08:57.556 { 00:08:57.556 "name": "BaseBdev3", 00:08:57.556 "uuid": "4988e76d-e993-4d21-9f83-d4f54af1404e", 00:08:57.556 "is_configured": true, 00:08:57.556 "data_offset": 2048, 00:08:57.556 "data_size": 63488 00:08:57.556 } 00:08:57.556 ] 00:08:57.556 }' 00:08:57.556 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.556 
16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.816 [2024-11-28 16:21:49.534395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.816 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 [2024-11-28 16:21:49.605339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.075 [2024-11-28 16:21:49.605499] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.075 [2024-11-28 16:21:49.616928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.075 [2024-11-28 16:21:49.617046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.075 [2024-11-28 16:21:49.617089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 BaseBdev2 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 [ 00:08:58.075 { 00:08:58.075 "name": "BaseBdev2", 00:08:58.075 "aliases": [ 00:08:58.075 "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e" 00:08:58.075 ], 00:08:58.075 "product_name": "Malloc disk", 00:08:58.075 "block_size": 512, 00:08:58.075 "num_blocks": 65536, 00:08:58.075 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:08:58.075 "assigned_rate_limits": { 00:08:58.075 "rw_ios_per_sec": 0, 00:08:58.075 "rw_mbytes_per_sec": 0, 00:08:58.075 "r_mbytes_per_sec": 0, 00:08:58.075 "w_mbytes_per_sec": 0 00:08:58.075 }, 00:08:58.075 "claimed": false, 00:08:58.075 "zoned": false, 00:08:58.075 "supported_io_types": { 00:08:58.075 "read": true, 00:08:58.075 "write": true, 00:08:58.075 "unmap": true, 00:08:58.075 "flush": true, 00:08:58.075 "reset": true, 00:08:58.075 "nvme_admin": false, 00:08:58.075 "nvme_io": false, 00:08:58.075 
"nvme_io_md": false, 00:08:58.075 "write_zeroes": true, 00:08:58.075 "zcopy": true, 00:08:58.075 "get_zone_info": false, 00:08:58.075 "zone_management": false, 00:08:58.075 "zone_append": false, 00:08:58.075 "compare": false, 00:08:58.075 "compare_and_write": false, 00:08:58.075 "abort": true, 00:08:58.075 "seek_hole": false, 00:08:58.075 "seek_data": false, 00:08:58.075 "copy": true, 00:08:58.075 "nvme_iov_md": false 00:08:58.075 }, 00:08:58.075 "memory_domains": [ 00:08:58.075 { 00:08:58.075 "dma_device_id": "system", 00:08:58.075 "dma_device_type": 1 00:08:58.075 }, 00:08:58.075 { 00:08:58.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.075 "dma_device_type": 2 00:08:58.075 } 00:08:58.075 ], 00:08:58.075 "driver_specific": {} 00:08:58.075 } 00:08:58.075 ] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 BaseBdev3 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.075 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.075 [ 00:08:58.075 { 00:08:58.075 "name": "BaseBdev3", 00:08:58.075 "aliases": [ 00:08:58.075 "0ed8a191-62ce-4d4c-ab36-3cb594d639c2" 00:08:58.075 ], 00:08:58.075 "product_name": "Malloc disk", 00:08:58.075 "block_size": 512, 00:08:58.075 "num_blocks": 65536, 00:08:58.075 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:08:58.075 "assigned_rate_limits": { 00:08:58.075 "rw_ios_per_sec": 0, 00:08:58.075 "rw_mbytes_per_sec": 0, 00:08:58.075 "r_mbytes_per_sec": 0, 00:08:58.075 "w_mbytes_per_sec": 0 00:08:58.075 }, 00:08:58.075 "claimed": false, 00:08:58.075 "zoned": false, 00:08:58.075 "supported_io_types": { 00:08:58.075 "read": true, 00:08:58.075 "write": true, 00:08:58.075 "unmap": true, 00:08:58.075 "flush": true, 00:08:58.075 "reset": true, 00:08:58.075 "nvme_admin": false, 
00:08:58.075 "nvme_io": false, 00:08:58.075 "nvme_io_md": false, 00:08:58.075 "write_zeroes": true, 00:08:58.075 "zcopy": true, 00:08:58.075 "get_zone_info": false, 00:08:58.075 "zone_management": false, 00:08:58.075 "zone_append": false, 00:08:58.075 "compare": false, 00:08:58.075 "compare_and_write": false, 00:08:58.075 "abort": true, 00:08:58.075 "seek_hole": false, 00:08:58.075 "seek_data": false, 00:08:58.075 "copy": true, 00:08:58.075 "nvme_iov_md": false 00:08:58.075 }, 00:08:58.075 "memory_domains": [ 00:08:58.075 { 00:08:58.075 "dma_device_id": "system", 00:08:58.075 "dma_device_type": 1 00:08:58.075 }, 00:08:58.075 { 00:08:58.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.075 "dma_device_type": 2 00:08:58.075 } 00:08:58.076 ], 00:08:58.076 "driver_specific": {} 00:08:58.076 } 00:08:58.076 ] 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.076 [2024-11-28 16:21:49.783964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.076 [2024-11-28 16:21:49.784020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.076 [2024-11-28 16:21:49.784040] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.076 [2024-11-28 16:21:49.785796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.076 
16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.076 "name": "Existed_Raid", 00:08:58.076 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:08:58.076 "strip_size_kb": 0, 00:08:58.076 "state": "configuring", 00:08:58.076 "raid_level": "raid1", 00:08:58.076 "superblock": true, 00:08:58.076 "num_base_bdevs": 3, 00:08:58.076 "num_base_bdevs_discovered": 2, 00:08:58.076 "num_base_bdevs_operational": 3, 00:08:58.076 "base_bdevs_list": [ 00:08:58.076 { 00:08:58.076 "name": "BaseBdev1", 00:08:58.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.076 "is_configured": false, 00:08:58.076 "data_offset": 0, 00:08:58.076 "data_size": 0 00:08:58.076 }, 00:08:58.076 { 00:08:58.076 "name": "BaseBdev2", 00:08:58.076 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:08:58.076 "is_configured": true, 00:08:58.076 "data_offset": 2048, 00:08:58.076 "data_size": 63488 00:08:58.076 }, 00:08:58.076 { 00:08:58.076 "name": "BaseBdev3", 00:08:58.076 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:08:58.076 "is_configured": true, 00:08:58.076 "data_offset": 2048, 00:08:58.076 "data_size": 63488 00:08:58.076 } 00:08:58.076 ] 00:08:58.076 }' 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.076 16:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.648 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:58.648 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.648 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.648 [2024-11-28 16:21:50.195668] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.649 16:21:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.649 "name": 
"Existed_Raid", 00:08:58.649 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:08:58.649 "strip_size_kb": 0, 00:08:58.649 "state": "configuring", 00:08:58.649 "raid_level": "raid1", 00:08:58.649 "superblock": true, 00:08:58.649 "num_base_bdevs": 3, 00:08:58.649 "num_base_bdevs_discovered": 1, 00:08:58.649 "num_base_bdevs_operational": 3, 00:08:58.649 "base_bdevs_list": [ 00:08:58.649 { 00:08:58.649 "name": "BaseBdev1", 00:08:58.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.649 "is_configured": false, 00:08:58.649 "data_offset": 0, 00:08:58.649 "data_size": 0 00:08:58.649 }, 00:08:58.649 { 00:08:58.649 "name": null, 00:08:58.649 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:08:58.649 "is_configured": false, 00:08:58.649 "data_offset": 0, 00:08:58.649 "data_size": 63488 00:08:58.649 }, 00:08:58.649 { 00:08:58.649 "name": "BaseBdev3", 00:08:58.649 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:08:58.649 "is_configured": true, 00:08:58.649 "data_offset": 2048, 00:08:58.649 "data_size": 63488 00:08:58.649 } 00:08:58.649 ] 00:08:58.649 }' 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.649 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:58.906 
16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.906 [2024-11-28 16:21:50.673731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.906 BaseBdev1 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.906 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.166 [ 00:08:59.166 { 00:08:59.166 "name": "BaseBdev1", 00:08:59.166 "aliases": [ 00:08:59.166 "aafc6dee-bc25-4e6d-a361-9a53d800dd6d" 00:08:59.166 ], 00:08:59.166 "product_name": "Malloc disk", 00:08:59.166 "block_size": 512, 00:08:59.166 "num_blocks": 65536, 00:08:59.166 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:08:59.166 "assigned_rate_limits": { 00:08:59.166 "rw_ios_per_sec": 0, 00:08:59.166 "rw_mbytes_per_sec": 0, 00:08:59.166 "r_mbytes_per_sec": 0, 00:08:59.166 "w_mbytes_per_sec": 0 00:08:59.166 }, 00:08:59.166 "claimed": true, 00:08:59.166 "claim_type": "exclusive_write", 00:08:59.166 "zoned": false, 00:08:59.166 "supported_io_types": { 00:08:59.166 "read": true, 00:08:59.166 "write": true, 00:08:59.166 "unmap": true, 00:08:59.166 "flush": true, 00:08:59.166 "reset": true, 00:08:59.166 "nvme_admin": false, 00:08:59.166 "nvme_io": false, 00:08:59.166 "nvme_io_md": false, 00:08:59.166 "write_zeroes": true, 00:08:59.166 "zcopy": true, 00:08:59.166 "get_zone_info": false, 00:08:59.166 "zone_management": false, 00:08:59.166 "zone_append": false, 00:08:59.166 "compare": false, 00:08:59.166 "compare_and_write": false, 00:08:59.166 "abort": true, 00:08:59.166 "seek_hole": false, 00:08:59.166 "seek_data": false, 00:08:59.166 "copy": true, 00:08:59.166 "nvme_iov_md": false 00:08:59.166 }, 00:08:59.166 "memory_domains": [ 00:08:59.166 { 00:08:59.166 "dma_device_id": "system", 00:08:59.166 "dma_device_type": 1 00:08:59.166 }, 00:08:59.166 { 00:08:59.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.166 "dma_device_type": 2 00:08:59.166 } 00:08:59.166 ], 00:08:59.166 "driver_specific": {} 00:08:59.166 } 00:08:59.166 ] 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:59.166 
16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.166 "name": "Existed_Raid", 00:08:59.166 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:08:59.166 "strip_size_kb": 0, 
00:08:59.166 "state": "configuring", 00:08:59.166 "raid_level": "raid1", 00:08:59.166 "superblock": true, 00:08:59.166 "num_base_bdevs": 3, 00:08:59.166 "num_base_bdevs_discovered": 2, 00:08:59.166 "num_base_bdevs_operational": 3, 00:08:59.166 "base_bdevs_list": [ 00:08:59.166 { 00:08:59.166 "name": "BaseBdev1", 00:08:59.166 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:08:59.166 "is_configured": true, 00:08:59.166 "data_offset": 2048, 00:08:59.166 "data_size": 63488 00:08:59.166 }, 00:08:59.166 { 00:08:59.166 "name": null, 00:08:59.166 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:08:59.166 "is_configured": false, 00:08:59.166 "data_offset": 0, 00:08:59.166 "data_size": 63488 00:08:59.166 }, 00:08:59.166 { 00:08:59.166 "name": "BaseBdev3", 00:08:59.166 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:08:59.166 "is_configured": true, 00:08:59.166 "data_offset": 2048, 00:08:59.166 "data_size": 63488 00:08:59.166 } 00:08:59.166 ] 00:08:59.166 }' 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.166 16:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.425 [2024-11-28 16:21:51.180910] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.425 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.426 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.426 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.426 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.426 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.426 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:59.426 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.685 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.685 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.685 "name": "Existed_Raid", 00:08:59.685 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:08:59.685 "strip_size_kb": 0, 00:08:59.685 "state": "configuring", 00:08:59.685 "raid_level": "raid1", 00:08:59.685 "superblock": true, 00:08:59.685 "num_base_bdevs": 3, 00:08:59.685 "num_base_bdevs_discovered": 1, 00:08:59.685 "num_base_bdevs_operational": 3, 00:08:59.685 "base_bdevs_list": [ 00:08:59.685 { 00:08:59.685 "name": "BaseBdev1", 00:08:59.685 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:08:59.685 "is_configured": true, 00:08:59.685 "data_offset": 2048, 00:08:59.685 "data_size": 63488 00:08:59.685 }, 00:08:59.685 { 00:08:59.685 "name": null, 00:08:59.685 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:08:59.685 "is_configured": false, 00:08:59.685 "data_offset": 0, 00:08:59.685 "data_size": 63488 00:08:59.685 }, 00:08:59.685 { 00:08:59.685 "name": null, 00:08:59.685 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:08:59.685 "is_configured": false, 00:08:59.685 "data_offset": 0, 00:08:59.685 "data_size": 63488 00:08:59.685 } 00:08:59.685 ] 00:08:59.685 }' 00:08:59.685 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.686 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.946 16:21:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.946 [2024-11-28 16:21:51.648135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.946 "name": "Existed_Raid", 00:08:59.946 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:08:59.946 "strip_size_kb": 0, 00:08:59.946 "state": "configuring", 00:08:59.946 "raid_level": "raid1", 00:08:59.946 "superblock": true, 00:08:59.946 "num_base_bdevs": 3, 00:08:59.946 "num_base_bdevs_discovered": 2, 00:08:59.946 "num_base_bdevs_operational": 3, 00:08:59.946 "base_bdevs_list": [ 00:08:59.946 { 00:08:59.946 "name": "BaseBdev1", 00:08:59.946 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:08:59.946 "is_configured": true, 00:08:59.946 "data_offset": 2048, 00:08:59.946 "data_size": 63488 00:08:59.946 }, 00:08:59.946 { 00:08:59.946 "name": null, 00:08:59.946 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:08:59.946 "is_configured": false, 00:08:59.946 "data_offset": 0, 00:08:59.946 "data_size": 63488 00:08:59.946 }, 00:08:59.946 { 00:08:59.946 "name": "BaseBdev3", 00:08:59.946 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:08:59.946 "is_configured": true, 00:08:59.946 "data_offset": 2048, 00:08:59.946 "data_size": 63488 00:08:59.946 } 00:08:59.946 ] 00:08:59.946 }' 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.946 16:21:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 [2024-11-28 16:21:52.147378] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.516 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.516 "name": "Existed_Raid", 00:09:00.516 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:09:00.516 "strip_size_kb": 0, 00:09:00.516 "state": "configuring", 00:09:00.516 "raid_level": "raid1", 00:09:00.516 "superblock": true, 00:09:00.516 "num_base_bdevs": 3, 00:09:00.516 "num_base_bdevs_discovered": 1, 00:09:00.516 "num_base_bdevs_operational": 3, 00:09:00.516 "base_bdevs_list": [ 00:09:00.516 { 00:09:00.516 "name": null, 00:09:00.516 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:09:00.516 "is_configured": false, 00:09:00.516 "data_offset": 0, 00:09:00.516 "data_size": 63488 00:09:00.516 }, 00:09:00.516 { 00:09:00.516 "name": null, 00:09:00.516 "uuid": 
"0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:09:00.516 "is_configured": false, 00:09:00.516 "data_offset": 0, 00:09:00.516 "data_size": 63488 00:09:00.517 }, 00:09:00.517 { 00:09:00.517 "name": "BaseBdev3", 00:09:00.517 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:09:00.517 "is_configured": true, 00:09:00.517 "data_offset": 2048, 00:09:00.517 "data_size": 63488 00:09:00.517 } 00:09:00.517 ] 00:09:00.517 }' 00:09:00.517 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.517 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.085 [2024-11-28 16:21:52.660929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.085 "name": "Existed_Raid", 00:09:01.085 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:09:01.085 "strip_size_kb": 0, 00:09:01.085 "state": "configuring", 00:09:01.085 
"raid_level": "raid1", 00:09:01.085 "superblock": true, 00:09:01.085 "num_base_bdevs": 3, 00:09:01.085 "num_base_bdevs_discovered": 2, 00:09:01.085 "num_base_bdevs_operational": 3, 00:09:01.085 "base_bdevs_list": [ 00:09:01.085 { 00:09:01.085 "name": null, 00:09:01.085 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:09:01.085 "is_configured": false, 00:09:01.085 "data_offset": 0, 00:09:01.085 "data_size": 63488 00:09:01.085 }, 00:09:01.085 { 00:09:01.085 "name": "BaseBdev2", 00:09:01.085 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:09:01.085 "is_configured": true, 00:09:01.085 "data_offset": 2048, 00:09:01.085 "data_size": 63488 00:09:01.085 }, 00:09:01.085 { 00:09:01.085 "name": "BaseBdev3", 00:09:01.085 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:09:01.085 "is_configured": true, 00:09:01.085 "data_offset": 2048, 00:09:01.085 "data_size": 63488 00:09:01.085 } 00:09:01.085 ] 00:09:01.085 }' 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.085 16:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.655 16:21:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aafc6dee-bc25-4e6d-a361-9a53d800dd6d 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.655 [2024-11-28 16:21:53.246639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.655 NewBaseBdev 00:09:01.655 [2024-11-28 16:21:53.246887] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:01.655 [2024-11-28 16:21:53.246903] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:01.655 [2024-11-28 16:21:53.247139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:01.655 [2024-11-28 16:21:53.247267] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:01.655 [2024-11-28 16:21:53.247279] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:01.655 [2024-11-28 16:21:53.247370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.655 
16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.655 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.655 [ 00:09:01.655 { 00:09:01.655 "name": "NewBaseBdev", 00:09:01.655 "aliases": [ 00:09:01.655 "aafc6dee-bc25-4e6d-a361-9a53d800dd6d" 00:09:01.655 ], 00:09:01.655 "product_name": "Malloc disk", 00:09:01.655 "block_size": 512, 00:09:01.655 "num_blocks": 65536, 00:09:01.655 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:09:01.655 "assigned_rate_limits": { 00:09:01.655 "rw_ios_per_sec": 0, 00:09:01.655 "rw_mbytes_per_sec": 0, 00:09:01.655 "r_mbytes_per_sec": 0, 00:09:01.655 "w_mbytes_per_sec": 0 00:09:01.655 }, 00:09:01.655 "claimed": true, 00:09:01.655 "claim_type": "exclusive_write", 00:09:01.655 
"zoned": false, 00:09:01.655 "supported_io_types": { 00:09:01.655 "read": true, 00:09:01.655 "write": true, 00:09:01.655 "unmap": true, 00:09:01.655 "flush": true, 00:09:01.655 "reset": true, 00:09:01.655 "nvme_admin": false, 00:09:01.655 "nvme_io": false, 00:09:01.655 "nvme_io_md": false, 00:09:01.655 "write_zeroes": true, 00:09:01.655 "zcopy": true, 00:09:01.655 "get_zone_info": false, 00:09:01.655 "zone_management": false, 00:09:01.655 "zone_append": false, 00:09:01.655 "compare": false, 00:09:01.655 "compare_and_write": false, 00:09:01.655 "abort": true, 00:09:01.655 "seek_hole": false, 00:09:01.655 "seek_data": false, 00:09:01.655 "copy": true, 00:09:01.655 "nvme_iov_md": false 00:09:01.655 }, 00:09:01.655 "memory_domains": [ 00:09:01.655 { 00:09:01.655 "dma_device_id": "system", 00:09:01.655 "dma_device_type": 1 00:09:01.655 }, 00:09:01.655 { 00:09:01.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.655 "dma_device_type": 2 00:09:01.655 } 00:09:01.655 ], 00:09:01.655 "driver_specific": {} 00:09:01.656 } 00:09:01.656 ] 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.656 "name": "Existed_Raid", 00:09:01.656 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:09:01.656 "strip_size_kb": 0, 00:09:01.656 "state": "online", 00:09:01.656 "raid_level": "raid1", 00:09:01.656 "superblock": true, 00:09:01.656 "num_base_bdevs": 3, 00:09:01.656 "num_base_bdevs_discovered": 3, 00:09:01.656 "num_base_bdevs_operational": 3, 00:09:01.656 "base_bdevs_list": [ 00:09:01.656 { 00:09:01.656 "name": "NewBaseBdev", 00:09:01.656 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:09:01.656 "is_configured": true, 00:09:01.656 "data_offset": 2048, 00:09:01.656 "data_size": 63488 00:09:01.656 }, 00:09:01.656 { 00:09:01.656 "name": "BaseBdev2", 00:09:01.656 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:09:01.656 "is_configured": true, 00:09:01.656 "data_offset": 2048, 00:09:01.656 "data_size": 63488 00:09:01.656 }, 00:09:01.656 
{ 00:09:01.656 "name": "BaseBdev3", 00:09:01.656 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:09:01.656 "is_configured": true, 00:09:01.656 "data_offset": 2048, 00:09:01.656 "data_size": 63488 00:09:01.656 } 00:09:01.656 ] 00:09:01.656 }' 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.656 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.225 [2024-11-28 16:21:53.766119] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.225 "name": "Existed_Raid", 00:09:02.225 
"aliases": [ 00:09:02.225 "914b3a0a-5437-40da-9e2c-1f4c0035cc6e" 00:09:02.225 ], 00:09:02.225 "product_name": "Raid Volume", 00:09:02.225 "block_size": 512, 00:09:02.225 "num_blocks": 63488, 00:09:02.225 "uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:09:02.225 "assigned_rate_limits": { 00:09:02.225 "rw_ios_per_sec": 0, 00:09:02.225 "rw_mbytes_per_sec": 0, 00:09:02.225 "r_mbytes_per_sec": 0, 00:09:02.225 "w_mbytes_per_sec": 0 00:09:02.225 }, 00:09:02.225 "claimed": false, 00:09:02.225 "zoned": false, 00:09:02.225 "supported_io_types": { 00:09:02.225 "read": true, 00:09:02.225 "write": true, 00:09:02.225 "unmap": false, 00:09:02.225 "flush": false, 00:09:02.225 "reset": true, 00:09:02.225 "nvme_admin": false, 00:09:02.225 "nvme_io": false, 00:09:02.225 "nvme_io_md": false, 00:09:02.225 "write_zeroes": true, 00:09:02.225 "zcopy": false, 00:09:02.225 "get_zone_info": false, 00:09:02.225 "zone_management": false, 00:09:02.225 "zone_append": false, 00:09:02.225 "compare": false, 00:09:02.225 "compare_and_write": false, 00:09:02.225 "abort": false, 00:09:02.225 "seek_hole": false, 00:09:02.225 "seek_data": false, 00:09:02.225 "copy": false, 00:09:02.225 "nvme_iov_md": false 00:09:02.225 }, 00:09:02.225 "memory_domains": [ 00:09:02.225 { 00:09:02.225 "dma_device_id": "system", 00:09:02.225 "dma_device_type": 1 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.225 "dma_device_type": 2 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "dma_device_id": "system", 00:09:02.225 "dma_device_type": 1 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.225 "dma_device_type": 2 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "dma_device_id": "system", 00:09:02.225 "dma_device_type": 1 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.225 "dma_device_type": 2 00:09:02.225 } 00:09:02.225 ], 00:09:02.225 "driver_specific": { 00:09:02.225 "raid": { 00:09:02.225 
"uuid": "914b3a0a-5437-40da-9e2c-1f4c0035cc6e", 00:09:02.225 "strip_size_kb": 0, 00:09:02.225 "state": "online", 00:09:02.225 "raid_level": "raid1", 00:09:02.225 "superblock": true, 00:09:02.225 "num_base_bdevs": 3, 00:09:02.225 "num_base_bdevs_discovered": 3, 00:09:02.225 "num_base_bdevs_operational": 3, 00:09:02.225 "base_bdevs_list": [ 00:09:02.225 { 00:09:02.225 "name": "NewBaseBdev", 00:09:02.225 "uuid": "aafc6dee-bc25-4e6d-a361-9a53d800dd6d", 00:09:02.225 "is_configured": true, 00:09:02.225 "data_offset": 2048, 00:09:02.225 "data_size": 63488 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "name": "BaseBdev2", 00:09:02.225 "uuid": "0f74cbe8-6cdc-4f69-90ee-a75bd24ed34e", 00:09:02.225 "is_configured": true, 00:09:02.225 "data_offset": 2048, 00:09:02.225 "data_size": 63488 00:09:02.225 }, 00:09:02.225 { 00:09:02.225 "name": "BaseBdev3", 00:09:02.225 "uuid": "0ed8a191-62ce-4d4c-ab36-3cb594d639c2", 00:09:02.225 "is_configured": true, 00:09:02.225 "data_offset": 2048, 00:09:02.225 "data_size": 63488 00:09:02.225 } 00:09:02.225 ] 00:09:02.225 } 00:09:02.225 } 00:09:02.225 }' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.225 BaseBdev2 00:09:02.225 BaseBdev3' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.225 16:21:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.225 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.226 16:21:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.226 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.486 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.486 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.486 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.486 16:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.486 16:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.486 [2024-11-28 16:21:54.005419] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.486 [2024-11-28 16:21:54.005496] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.486 [2024-11-28 16:21:54.005562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.486 [2024-11-28 16:21:54.005796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.486 [2024-11-28 16:21:54.005806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79080 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 79080 ']' 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79080 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79080 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79080' 00:09:02.486 killing process with pid 79080 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79080 00:09:02.486 [2024-11-28 16:21:54.056257] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.486 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79080 00:09:02.486 [2024-11-28 16:21:54.086919] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.745 16:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.745 00:09:02.745 real 0m8.761s 00:09:02.745 user 0m14.972s 00:09:02.745 sys 0m1.787s 00:09:02.745 ************************************ 00:09:02.745 END TEST raid_state_function_test_sb 00:09:02.745 ************************************ 00:09:02.745 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.745 16:21:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 16:21:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:02.745 16:21:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:02.745 16:21:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.745 16:21:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 ************************************ 00:09:02.745 START TEST raid_superblock_test 00:09:02.745 ************************************ 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79685 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79685 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79685 ']' 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.745 16:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 [2024-11-28 16:21:54.488977] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:02.745 [2024-11-28 16:21:54.489202] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79685 ] 00:09:03.005 [2024-11-28 16:21:54.650112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.005 [2024-11-28 16:21:54.693723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.005 [2024-11-28 16:21:54.735141] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.005 [2024-11-28 16:21:54.735188] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:03.574 
16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.574 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 malloc1 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 [2024-11-28 16:21:55.352857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.835 [2024-11-28 16:21:55.353023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.835 [2024-11-28 16:21:55.353069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:03.835 [2024-11-28 16:21:55.353132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.835 [2024-11-28 16:21:55.355338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.835 [2024-11-28 16:21:55.355438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.835 pt1 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 malloc2 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 [2024-11-28 16:21:55.403803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.835 [2024-11-28 16:21:55.404050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.835 [2024-11-28 16:21:55.404132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:03.835 [2024-11-28 16:21:55.404214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.835 [2024-11-28 16:21:55.409036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.835 [2024-11-28 16:21:55.409188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.835 
pt2 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 malloc3 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 [2024-11-28 16:21:55.434874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:03.835 [2024-11-28 16:21:55.434979] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.835 [2024-11-28 16:21:55.435030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:03.835 [2024-11-28 16:21:55.435059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.835 [2024-11-28 16:21:55.437128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.835 [2024-11-28 16:21:55.437200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:03.835 pt3 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.835 [2024-11-28 16:21:55.446904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:03.835 [2024-11-28 16:21:55.448762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.835 [2024-11-28 16:21:55.448884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:03.835 [2024-11-28 16:21:55.449041] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:03.835 [2024-11-28 16:21:55.449085] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:03.835 [2024-11-28 16:21:55.449357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:03.835 
[2024-11-28 16:21:55.449530] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:03.835 [2024-11-28 16:21:55.449575] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:03.835 [2024-11-28 16:21:55.449730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.835 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.836 "name": "raid_bdev1", 00:09:03.836 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:03.836 "strip_size_kb": 0, 00:09:03.836 "state": "online", 00:09:03.836 "raid_level": "raid1", 00:09:03.836 "superblock": true, 00:09:03.836 "num_base_bdevs": 3, 00:09:03.836 "num_base_bdevs_discovered": 3, 00:09:03.836 "num_base_bdevs_operational": 3, 00:09:03.836 "base_bdevs_list": [ 00:09:03.836 { 00:09:03.836 "name": "pt1", 00:09:03.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.836 "is_configured": true, 00:09:03.836 "data_offset": 2048, 00:09:03.836 "data_size": 63488 00:09:03.836 }, 00:09:03.836 { 00:09:03.836 "name": "pt2", 00:09:03.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.836 "is_configured": true, 00:09:03.836 "data_offset": 2048, 00:09:03.836 "data_size": 63488 00:09:03.836 }, 00:09:03.836 { 00:09:03.836 "name": "pt3", 00:09:03.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.836 "is_configured": true, 00:09:03.836 "data_offset": 2048, 00:09:03.836 "data_size": 63488 00:09:03.836 } 00:09:03.836 ] 00:09:03.836 }' 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.836 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.418 16:21:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.418 [2024-11-28 16:21:55.894376] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.418 "name": "raid_bdev1", 00:09:04.418 "aliases": [ 00:09:04.418 "41f1f513-0d66-498b-a5eb-b1e3ef7718c0" 00:09:04.418 ], 00:09:04.418 "product_name": "Raid Volume", 00:09:04.418 "block_size": 512, 00:09:04.418 "num_blocks": 63488, 00:09:04.418 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:04.418 "assigned_rate_limits": { 00:09:04.418 "rw_ios_per_sec": 0, 00:09:04.418 "rw_mbytes_per_sec": 0, 00:09:04.418 "r_mbytes_per_sec": 0, 00:09:04.418 "w_mbytes_per_sec": 0 00:09:04.418 }, 00:09:04.418 "claimed": false, 00:09:04.418 "zoned": false, 00:09:04.418 "supported_io_types": { 00:09:04.418 "read": true, 00:09:04.418 "write": true, 00:09:04.418 "unmap": false, 00:09:04.418 "flush": false, 00:09:04.418 "reset": true, 00:09:04.418 "nvme_admin": false, 00:09:04.418 "nvme_io": false, 00:09:04.418 "nvme_io_md": false, 00:09:04.418 "write_zeroes": true, 00:09:04.418 "zcopy": false, 00:09:04.418 "get_zone_info": false, 00:09:04.418 "zone_management": false, 00:09:04.418 "zone_append": false, 00:09:04.418 "compare": false, 00:09:04.418 
"compare_and_write": false, 00:09:04.418 "abort": false, 00:09:04.418 "seek_hole": false, 00:09:04.418 "seek_data": false, 00:09:04.418 "copy": false, 00:09:04.418 "nvme_iov_md": false 00:09:04.418 }, 00:09:04.418 "memory_domains": [ 00:09:04.418 { 00:09:04.418 "dma_device_id": "system", 00:09:04.418 "dma_device_type": 1 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.418 "dma_device_type": 2 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "dma_device_id": "system", 00:09:04.418 "dma_device_type": 1 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.418 "dma_device_type": 2 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "dma_device_id": "system", 00:09:04.418 "dma_device_type": 1 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.418 "dma_device_type": 2 00:09:04.418 } 00:09:04.418 ], 00:09:04.418 "driver_specific": { 00:09:04.418 "raid": { 00:09:04.418 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:04.418 "strip_size_kb": 0, 00:09:04.418 "state": "online", 00:09:04.418 "raid_level": "raid1", 00:09:04.418 "superblock": true, 00:09:04.418 "num_base_bdevs": 3, 00:09:04.418 "num_base_bdevs_discovered": 3, 00:09:04.418 "num_base_bdevs_operational": 3, 00:09:04.418 "base_bdevs_list": [ 00:09:04.418 { 00:09:04.418 "name": "pt1", 00:09:04.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.418 "is_configured": true, 00:09:04.418 "data_offset": 2048, 00:09:04.418 "data_size": 63488 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "name": "pt2", 00:09:04.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.418 "is_configured": true, 00:09:04.418 "data_offset": 2048, 00:09:04.418 "data_size": 63488 00:09:04.418 }, 00:09:04.418 { 00:09:04.418 "name": "pt3", 00:09:04.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.418 "is_configured": true, 00:09:04.418 "data_offset": 2048, 00:09:04.418 "data_size": 63488 00:09:04.418 } 
00:09:04.418 ] 00:09:04.418 } 00:09:04.418 } 00:09:04.418 }' 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.418 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:04.418 pt2 00:09:04.418 pt3' 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.419 16:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:04.419 [2024-11-28 16:21:56.153891] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.419 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=41f1f513-0d66-498b-a5eb-b1e3ef7718c0 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 41f1f513-0d66-498b-a5eb-b1e3ef7718c0 ']' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 [2024-11-28 16:21:56.197544] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.679 [2024-11-28 16:21:56.197568] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.679 [2024-11-28 16:21:56.197637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.679 [2024-11-28 16:21:56.197708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.679 [2024-11-28 16:21:56.197720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 [2024-11-28 16:21:56.349301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:04.679 [2024-11-28 16:21:56.351143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:04.679 [2024-11-28 16:21:56.351185] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:04.679 [2024-11-28 16:21:56.351231] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:04.679 [2024-11-28 16:21:56.351281] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:04.679 [2024-11-28 16:21:56.351320] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:04.679 [2024-11-28 16:21:56.351331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.679 [2024-11-28 16:21:56.351341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:04.679 request: 00:09:04.679 { 00:09:04.679 "name": "raid_bdev1", 00:09:04.679 "raid_level": "raid1", 00:09:04.679 "base_bdevs": [ 00:09:04.679 "malloc1", 00:09:04.679 "malloc2", 00:09:04.679 "malloc3" 00:09:04.679 ], 00:09:04.679 "superblock": false, 00:09:04.679 "method": "bdev_raid_create", 00:09:04.679 "req_id": 1 00:09:04.679 } 00:09:04.679 Got JSON-RPC error response 00:09:04.679 response: 00:09:04.679 { 00:09:04.679 "code": -17, 00:09:04.679 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:04.679 } 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 [2024-11-28 16:21:56.409165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.679 [2024-11-28 16:21:56.409266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.679 [2024-11-28 16:21:56.409302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:04.679 [2024-11-28 16:21:56.409332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.679 [2024-11-28 16:21:56.411365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.679 [2024-11-28 16:21:56.411434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.679 [2024-11-28 16:21:56.411529] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:04.679 [2024-11-28 16:21:56.411591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.679 pt1 00:09:04.679 
16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.941 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.941 "name": "raid_bdev1", 00:09:04.941 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:04.941 "strip_size_kb": 0, 00:09:04.941 
"state": "configuring", 00:09:04.941 "raid_level": "raid1", 00:09:04.941 "superblock": true, 00:09:04.941 "num_base_bdevs": 3, 00:09:04.941 "num_base_bdevs_discovered": 1, 00:09:04.941 "num_base_bdevs_operational": 3, 00:09:04.941 "base_bdevs_list": [ 00:09:04.941 { 00:09:04.941 "name": "pt1", 00:09:04.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.941 "is_configured": true, 00:09:04.941 "data_offset": 2048, 00:09:04.941 "data_size": 63488 00:09:04.941 }, 00:09:04.941 { 00:09:04.941 "name": null, 00:09:04.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.941 "is_configured": false, 00:09:04.941 "data_offset": 2048, 00:09:04.941 "data_size": 63488 00:09:04.941 }, 00:09:04.941 { 00:09:04.941 "name": null, 00:09:04.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.941 "is_configured": false, 00:09:04.941 "data_offset": 2048, 00:09:04.941 "data_size": 63488 00:09:04.941 } 00:09:04.941 ] 00:09:04.941 }' 00:09:04.941 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.941 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 [2024-11-28 16:21:56.756623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.202 [2024-11-28 16:21:56.756677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.202 [2024-11-28 16:21:56.756712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:05.202 
[2024-11-28 16:21:56.756724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.202 [2024-11-28 16:21:56.757092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.202 [2024-11-28 16:21:56.757110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.202 [2024-11-28 16:21:56.757167] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.202 [2024-11-28 16:21:56.757194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.202 pt2 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 [2024-11-28 16:21:56.768628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.202 "name": "raid_bdev1", 00:09:05.202 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:05.202 "strip_size_kb": 0, 00:09:05.202 "state": "configuring", 00:09:05.202 "raid_level": "raid1", 00:09:05.202 "superblock": true, 00:09:05.202 "num_base_bdevs": 3, 00:09:05.202 "num_base_bdevs_discovered": 1, 00:09:05.202 "num_base_bdevs_operational": 3, 00:09:05.202 "base_bdevs_list": [ 00:09:05.202 { 00:09:05.202 "name": "pt1", 00:09:05.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.202 "is_configured": true, 00:09:05.202 "data_offset": 2048, 00:09:05.202 "data_size": 63488 00:09:05.202 }, 00:09:05.202 { 00:09:05.202 "name": null, 00:09:05.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.202 "is_configured": false, 00:09:05.202 "data_offset": 0, 00:09:05.202 "data_size": 63488 00:09:05.202 }, 00:09:05.202 { 00:09:05.202 "name": null, 00:09:05.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.202 "is_configured": false, 00:09:05.202 
"data_offset": 2048, 00:09:05.202 "data_size": 63488 00:09:05.202 } 00:09:05.202 ] 00:09:05.202 }' 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.202 16:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:05.462 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.462 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.462 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.462 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.462 [2024-11-28 16:21:57.219867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.462 [2024-11-28 16:21:57.220001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.462 [2024-11-28 16:21:57.220038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:05.462 [2024-11-28 16:21:57.220066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.462 [2024-11-28 16:21:57.220454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.462 [2024-11-28 16:21:57.220508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.462 [2024-11-28 16:21:57.220604] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.462 [2024-11-28 16:21:57.220659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.463 pt2 00:09:05.463 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.463 16:21:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.463 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.463 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:05.463 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.463 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.463 [2024-11-28 16:21:57.231817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:05.723 [2024-11-28 16:21:57.231918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.723 [2024-11-28 16:21:57.231940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:05.723 [2024-11-28 16:21:57.231949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.723 [2024-11-28 16:21:57.232256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.723 [2024-11-28 16:21:57.232271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:05.723 [2024-11-28 16:21:57.232326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:05.723 [2024-11-28 16:21:57.232342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:05.723 [2024-11-28 16:21:57.232429] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:05.723 [2024-11-28 16:21:57.232437] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:05.723 [2024-11-28 16:21:57.232648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:05.723 [2024-11-28 16:21:57.232759] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:05.723 [2024-11-28 16:21:57.232771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:05.723 [2024-11-28 16:21:57.232877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.723 pt3 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.723 16:21:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.723 "name": "raid_bdev1", 00:09:05.723 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:05.723 "strip_size_kb": 0, 00:09:05.723 "state": "online", 00:09:05.723 "raid_level": "raid1", 00:09:05.723 "superblock": true, 00:09:05.723 "num_base_bdevs": 3, 00:09:05.723 "num_base_bdevs_discovered": 3, 00:09:05.723 "num_base_bdevs_operational": 3, 00:09:05.723 "base_bdevs_list": [ 00:09:05.723 { 00:09:05.723 "name": "pt1", 00:09:05.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.723 "is_configured": true, 00:09:05.723 "data_offset": 2048, 00:09:05.723 "data_size": 63488 00:09:05.723 }, 00:09:05.723 { 00:09:05.723 "name": "pt2", 00:09:05.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.723 "is_configured": true, 00:09:05.723 "data_offset": 2048, 00:09:05.723 "data_size": 63488 00:09:05.723 }, 00:09:05.723 { 00:09:05.723 "name": "pt3", 00:09:05.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.723 "is_configured": true, 00:09:05.723 "data_offset": 2048, 00:09:05.723 "data_size": 63488 00:09:05.723 } 00:09:05.723 ] 00:09:05.723 }' 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.723 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.983 [2024-11-28 16:21:57.667388] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.983 "name": "raid_bdev1", 00:09:05.983 "aliases": [ 00:09:05.983 "41f1f513-0d66-498b-a5eb-b1e3ef7718c0" 00:09:05.983 ], 00:09:05.983 "product_name": "Raid Volume", 00:09:05.983 "block_size": 512, 00:09:05.983 "num_blocks": 63488, 00:09:05.983 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:05.983 "assigned_rate_limits": { 00:09:05.983 "rw_ios_per_sec": 0, 00:09:05.983 "rw_mbytes_per_sec": 0, 00:09:05.983 "r_mbytes_per_sec": 0, 00:09:05.983 "w_mbytes_per_sec": 0 00:09:05.983 }, 00:09:05.983 "claimed": false, 00:09:05.983 "zoned": false, 00:09:05.983 "supported_io_types": { 00:09:05.983 "read": true, 00:09:05.983 "write": true, 00:09:05.983 "unmap": false, 00:09:05.983 "flush": false, 00:09:05.983 "reset": true, 00:09:05.983 "nvme_admin": false, 00:09:05.983 "nvme_io": false, 00:09:05.983 "nvme_io_md": false, 00:09:05.983 "write_zeroes": true, 00:09:05.983 "zcopy": false, 00:09:05.983 "get_zone_info": 
false, 00:09:05.983 "zone_management": false, 00:09:05.983 "zone_append": false, 00:09:05.983 "compare": false, 00:09:05.983 "compare_and_write": false, 00:09:05.983 "abort": false, 00:09:05.983 "seek_hole": false, 00:09:05.983 "seek_data": false, 00:09:05.983 "copy": false, 00:09:05.983 "nvme_iov_md": false 00:09:05.983 }, 00:09:05.983 "memory_domains": [ 00:09:05.983 { 00:09:05.983 "dma_device_id": "system", 00:09:05.983 "dma_device_type": 1 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.983 "dma_device_type": 2 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "dma_device_id": "system", 00:09:05.983 "dma_device_type": 1 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.983 "dma_device_type": 2 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "dma_device_id": "system", 00:09:05.983 "dma_device_type": 1 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.983 "dma_device_type": 2 00:09:05.983 } 00:09:05.983 ], 00:09:05.983 "driver_specific": { 00:09:05.983 "raid": { 00:09:05.983 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:05.983 "strip_size_kb": 0, 00:09:05.983 "state": "online", 00:09:05.983 "raid_level": "raid1", 00:09:05.983 "superblock": true, 00:09:05.983 "num_base_bdevs": 3, 00:09:05.983 "num_base_bdevs_discovered": 3, 00:09:05.983 "num_base_bdevs_operational": 3, 00:09:05.983 "base_bdevs_list": [ 00:09:05.983 { 00:09:05.983 "name": "pt1", 00:09:05.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.983 "is_configured": true, 00:09:05.983 "data_offset": 2048, 00:09:05.983 "data_size": 63488 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "name": "pt2", 00:09:05.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.983 "is_configured": true, 00:09:05.983 "data_offset": 2048, 00:09:05.983 "data_size": 63488 00:09:05.983 }, 00:09:05.983 { 00:09:05.983 "name": "pt3", 00:09:05.983 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:05.983 "is_configured": true, 00:09:05.983 "data_offset": 2048, 00:09:05.983 "data_size": 63488 00:09:05.983 } 00:09:05.983 ] 00:09:05.983 } 00:09:05.983 } 00:09:05.983 }' 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:05.983 pt2 00:09:05.983 pt3' 00:09:05.983 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.243 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.244 16:21:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:06.244 [2024-11-28 16:21:57.918927] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 41f1f513-0d66-498b-a5eb-b1e3ef7718c0 '!=' 41f1f513-0d66-498b-a5eb-b1e3ef7718c0 ']' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 [2024-11-28 16:21:57.962617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.244 16:21:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 16:21:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.505 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.505 "name": "raid_bdev1", 00:09:06.505 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:06.505 "strip_size_kb": 0, 00:09:06.505 "state": "online", 00:09:06.505 "raid_level": "raid1", 00:09:06.505 "superblock": true, 00:09:06.505 "num_base_bdevs": 3, 00:09:06.505 "num_base_bdevs_discovered": 2, 00:09:06.505 "num_base_bdevs_operational": 2, 00:09:06.505 "base_bdevs_list": [ 00:09:06.505 { 00:09:06.505 "name": null, 00:09:06.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.505 "is_configured": false, 00:09:06.505 "data_offset": 0, 00:09:06.505 "data_size": 63488 00:09:06.505 }, 00:09:06.505 { 00:09:06.505 "name": "pt2", 00:09:06.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.505 "is_configured": true, 00:09:06.505 "data_offset": 2048, 00:09:06.505 "data_size": 63488 00:09:06.505 }, 00:09:06.505 { 00:09:06.505 "name": "pt3", 00:09:06.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.505 "is_configured": true, 00:09:06.505 "data_offset": 2048, 00:09:06.505 "data_size": 63488 00:09:06.505 } 
00:09:06.505 ] 00:09:06.505 }' 00:09:06.505 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.505 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.765 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.765 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 [2024-11-28 16:21:58.445780] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.765 [2024-11-28 16:21:58.445896] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.765 [2024-11-28 16:21:58.445978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.765 [2024-11-28 16:21:58.446036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.765 [2024-11-28 16:21:58.446045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.766 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.766 16:21:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.026 [2024-11-28 16:21:58.537594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.026 [2024-11-28 16:21:58.537689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.026 [2024-11-28 16:21:58.537712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:07.026 [2024-11-28 16:21:58.537720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.026 [2024-11-28 16:21:58.539769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.026 [2024-11-28 16:21:58.539804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.026 [2024-11-28 16:21:58.539882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:07.026 [2024-11-28 16:21:58.539912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.026 pt2 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.026 16:21:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.026 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.026 "name": "raid_bdev1", 00:09:07.026 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:07.026 "strip_size_kb": 0, 00:09:07.026 "state": "configuring", 00:09:07.026 "raid_level": "raid1", 00:09:07.026 "superblock": true, 00:09:07.026 "num_base_bdevs": 3, 00:09:07.026 "num_base_bdevs_discovered": 1, 00:09:07.026 "num_base_bdevs_operational": 2, 00:09:07.026 "base_bdevs_list": [ 00:09:07.026 { 00:09:07.026 "name": null, 00:09:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.026 "is_configured": false, 00:09:07.026 "data_offset": 2048, 00:09:07.026 "data_size": 63488 00:09:07.026 }, 00:09:07.026 { 00:09:07.026 "name": "pt2", 00:09:07.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.026 "is_configured": true, 00:09:07.026 "data_offset": 2048, 00:09:07.026 "data_size": 63488 00:09:07.027 }, 00:09:07.027 { 00:09:07.027 "name": null, 00:09:07.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.027 "is_configured": false, 00:09:07.027 "data_offset": 2048, 00:09:07.027 "data_size": 63488 00:09:07.027 } 
00:09:07.027 ] 00:09:07.027 }' 00:09:07.027 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.027 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.287 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:07.287 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:07.287 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:07.287 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:07.287 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.287 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.287 [2024-11-28 16:21:58.988889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:07.287 [2024-11-28 16:21:58.989035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.288 [2024-11-28 16:21:58.989076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:07.288 [2024-11-28 16:21:58.989103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.288 [2024-11-28 16:21:58.989510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.288 [2024-11-28 16:21:58.989566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:07.288 [2024-11-28 16:21:58.989668] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:07.288 [2024-11-28 16:21:58.989717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.288 [2024-11-28 16:21:58.989823] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:07.288 [2024-11-28 16:21:58.989871] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.288 [2024-11-28 16:21:58.990129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:07.288 [2024-11-28 16:21:58.990278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:07.288 [2024-11-28 16:21:58.990318] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:07.288 [2024-11-28 16:21:58.990458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.288 pt3 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.288 16:21:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.288 "name": "raid_bdev1", 00:09:07.288 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:07.288 "strip_size_kb": 0, 00:09:07.288 "state": "online", 00:09:07.288 "raid_level": "raid1", 00:09:07.288 "superblock": true, 00:09:07.288 "num_base_bdevs": 3, 00:09:07.288 "num_base_bdevs_discovered": 2, 00:09:07.288 "num_base_bdevs_operational": 2, 00:09:07.288 "base_bdevs_list": [ 00:09:07.288 { 00:09:07.288 "name": null, 00:09:07.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.288 "is_configured": false, 00:09:07.288 "data_offset": 2048, 00:09:07.288 "data_size": 63488 00:09:07.288 }, 00:09:07.288 { 00:09:07.288 "name": "pt2", 00:09:07.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.288 "is_configured": true, 00:09:07.288 "data_offset": 2048, 00:09:07.288 "data_size": 63488 00:09:07.288 }, 00:09:07.288 { 00:09:07.288 "name": "pt3", 00:09:07.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.288 "is_configured": true, 00:09:07.288 "data_offset": 2048, 00:09:07.288 "data_size": 63488 00:09:07.288 } 00:09:07.288 ] 00:09:07.288 }' 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.288 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 [2024-11-28 16:21:59.392156] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.885 [2024-11-28 16:21:59.392185] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.885 [2024-11-28 16:21:59.392255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.885 [2024-11-28 16:21:59.392311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.885 [2024-11-28 16:21:59.392322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 [2024-11-28 16:21:59.464020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.885 [2024-11-28 16:21:59.464080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.885 [2024-11-28 16:21:59.464096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:07.885 [2024-11-28 16:21:59.464107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.885 [2024-11-28 16:21:59.466177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.885 [2024-11-28 16:21:59.466214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.885 [2024-11-28 16:21:59.466280] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:07.885 [2024-11-28 16:21:59.466317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.885 [2024-11-28 16:21:59.466410] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:07.885 [2024-11-28 16:21:59.466425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.885 [2024-11-28 16:21:59.466442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:07.885 [2024-11-28 16:21:59.466478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.885 pt1 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.885 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.886 "name": "raid_bdev1", 00:09:07.886 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:07.886 "strip_size_kb": 0, 00:09:07.886 "state": "configuring", 00:09:07.886 "raid_level": "raid1", 00:09:07.886 "superblock": true, 00:09:07.886 "num_base_bdevs": 3, 00:09:07.886 "num_base_bdevs_discovered": 1, 00:09:07.886 "num_base_bdevs_operational": 2, 00:09:07.886 "base_bdevs_list": [ 00:09:07.886 { 00:09:07.886 "name": null, 00:09:07.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.886 "is_configured": false, 00:09:07.886 "data_offset": 2048, 00:09:07.886 "data_size": 63488 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "name": "pt2", 00:09:07.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.886 "is_configured": true, 00:09:07.886 "data_offset": 2048, 00:09:07.886 "data_size": 63488 00:09:07.886 }, 00:09:07.886 { 00:09:07.886 "name": null, 00:09:07.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.886 "is_configured": false, 00:09:07.886 "data_offset": 2048, 00:09:07.886 "data_size": 63488 00:09:07.886 } 00:09:07.886 ] 00:09:07.886 }' 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.886 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.456 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.456 [2024-11-28 16:21:59.987140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:08.456 [2024-11-28 16:21:59.987256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.456 [2024-11-28 16:21:59.987301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:08.456 [2024-11-28 16:21:59.987338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.456 [2024-11-28 16:21:59.987762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.456 [2024-11-28 16:21:59.987839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:08.457 [2024-11-28 16:21:59.987956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:08.457 [2024-11-28 16:21:59.988030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:08.457 [2024-11-28 16:21:59.988164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:08.457 [2024-11-28 16:21:59.988203] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:08.457 [2024-11-28 16:21:59.988435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:08.457 [2024-11-28 16:21:59.988594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:08.457 [2024-11-28 16:21:59.988633] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:08.457 [2024-11-28 16:21:59.988772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.457 pt3 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.457 16:21:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.457 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.457 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.457 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:08.457 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.457 "name": "raid_bdev1", 00:09:08.457 "uuid": "41f1f513-0d66-498b-a5eb-b1e3ef7718c0", 00:09:08.457 "strip_size_kb": 0, 00:09:08.457 "state": "online", 00:09:08.457 "raid_level": "raid1", 00:09:08.457 "superblock": true, 00:09:08.457 "num_base_bdevs": 3, 00:09:08.457 "num_base_bdevs_discovered": 2, 00:09:08.457 "num_base_bdevs_operational": 2, 00:09:08.457 "base_bdevs_list": [ 00:09:08.457 { 00:09:08.457 "name": null, 00:09:08.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.457 "is_configured": false, 00:09:08.457 "data_offset": 2048, 00:09:08.457 "data_size": 63488 00:09:08.457 }, 00:09:08.457 { 00:09:08.457 "name": "pt2", 00:09:08.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.457 "is_configured": true, 00:09:08.457 "data_offset": 2048, 00:09:08.457 "data_size": 63488 00:09:08.457 }, 00:09:08.457 { 00:09:08.457 "name": "pt3", 00:09:08.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.457 "is_configured": true, 00:09:08.457 "data_offset": 2048, 00:09:08.457 "data_size": 63488 00:09:08.457 } 00:09:08.457 ] 00:09:08.457 }' 00:09:08.457 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.457 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:08.717 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.717 [2024-11-28 16:22:00.478536] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 41f1f513-0d66-498b-a5eb-b1e3ef7718c0 '!=' 41f1f513-0d66-498b-a5eb-b1e3ef7718c0 ']' 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79685 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79685 ']' 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79685 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79685 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79685' 00:09:08.979 killing process with pid 79685 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79685 00:09:08.979 [2024-11-28 16:22:00.549449] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.979 [2024-11-28 16:22:00.549588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.979 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79685 00:09:08.979 [2024-11-28 16:22:00.549676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.979 [2024-11-28 16:22:00.549688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:08.979 [2024-11-28 16:22:00.582915] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.253 16:22:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:09.253 00:09:09.253 real 0m6.420s 00:09:09.253 user 0m10.726s 00:09:09.253 sys 0m1.347s 00:09:09.253 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.253 16:22:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.253 ************************************ 00:09:09.253 END TEST raid_superblock_test 00:09:09.253 ************************************ 00:09:09.253 16:22:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:09.253 16:22:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:09.253 16:22:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.253 16:22:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.253 ************************************ 00:09:09.253 START TEST raid_read_error_test 00:09:09.253 ************************************ 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:09.253 16:22:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.253 16:22:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XnQXekN3Mn 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80114 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80114 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80114 ']' 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.253 16:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.253 [2024-11-28 16:22:00.993825] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:09.253 [2024-11-28 16:22:00.993981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80114 ] 00:09:09.531 [2024-11-28 16:22:01.154673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.531 [2024-11-28 16:22:01.200505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.531 [2024-11-28 16:22:01.241625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.531 [2024-11-28 16:22:01.241660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 BaseBdev1_malloc 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 true 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.101 [2024-11-28 16:22:01.859189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.101 [2024-11-28 16:22:01.859258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.101 [2024-11-28 16:22:01.859280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.101 [2024-11-28 16:22:01.859289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.101 [2024-11-28 16:22:01.861346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.101 [2024-11-28 16:22:01.861471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.101 BaseBdev1 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.101 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 BaseBdev2_malloc 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 true 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.361 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 [2024-11-28 16:22:01.909272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.362 [2024-11-28 16:22:01.909417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.362 [2024-11-28 16:22:01.909441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.362 [2024-11-28 16:22:01.909450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.362 [2024-11-28 16:22:01.911436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.362 [2024-11-28 16:22:01.911473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.362 BaseBdev2 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 BaseBdev3_malloc 00:09:10.362 16:22:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 true 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 [2024-11-28 16:22:01.949649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.362 [2024-11-28 16:22:01.949706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.362 [2024-11-28 16:22:01.949724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:10.362 [2024-11-28 16:22:01.949732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.362 [2024-11-28 16:22:01.951745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.362 [2024-11-28 16:22:01.951781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:10.362 BaseBdev3 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 [2024-11-28 16:22:01.961692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.362 [2024-11-28 16:22:01.963458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.362 [2024-11-28 16:22:01.963607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.362 [2024-11-28 16:22:01.963798] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:10.362 [2024-11-28 16:22:01.963816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:10.362 [2024-11-28 16:22:01.964057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:10.362 [2024-11-28 16:22:01.964199] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:10.362 [2024-11-28 16:22:01.964208] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:10.362 [2024-11-28 16:22:01.964345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.362 16:22:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 16:22:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 16:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.362 "name": "raid_bdev1", 00:09:10.362 "uuid": "24954a4c-ee11-41fb-801a-5d5f6391bbff", 00:09:10.362 "strip_size_kb": 0, 00:09:10.362 "state": "online", 00:09:10.362 "raid_level": "raid1", 00:09:10.362 "superblock": true, 00:09:10.362 "num_base_bdevs": 3, 00:09:10.362 "num_base_bdevs_discovered": 3, 00:09:10.362 "num_base_bdevs_operational": 3, 00:09:10.362 "base_bdevs_list": [ 00:09:10.362 { 00:09:10.362 "name": "BaseBdev1", 00:09:10.362 "uuid": "13a0e426-6906-5dff-9c34-820466fa8d0d", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 2048, 00:09:10.362 "data_size": 63488 00:09:10.362 }, 00:09:10.362 { 00:09:10.362 "name": "BaseBdev2", 00:09:10.362 "uuid": "45271078-67b6-5a99-b9fe-2f74ab8d4e58", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 2048, 00:09:10.362 "data_size": 63488 
00:09:10.362 }, 00:09:10.362 { 00:09:10.362 "name": "BaseBdev3", 00:09:10.362 "uuid": "ea6d0ebc-35d6-51af-8b34-0916b09b8b87", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 2048, 00:09:10.362 "data_size": 63488 00:09:10.362 } 00:09:10.362 ] 00:09:10.362 }' 00:09:10.362 16:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.362 16:22:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.930 16:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.930 16:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.930 [2024-11-28 16:22:02.509243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.870 
16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.870 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.871 "name": "raid_bdev1", 00:09:11.871 "uuid": "24954a4c-ee11-41fb-801a-5d5f6391bbff", 00:09:11.871 "strip_size_kb": 0, 00:09:11.871 "state": "online", 00:09:11.871 "raid_level": "raid1", 00:09:11.871 "superblock": true, 00:09:11.871 "num_base_bdevs": 3, 00:09:11.871 "num_base_bdevs_discovered": 3, 00:09:11.871 "num_base_bdevs_operational": 3, 00:09:11.871 "base_bdevs_list": [ 00:09:11.871 { 00:09:11.871 "name": "BaseBdev1", 00:09:11.871 "uuid": "13a0e426-6906-5dff-9c34-820466fa8d0d", 
00:09:11.871 "is_configured": true, 00:09:11.871 "data_offset": 2048, 00:09:11.871 "data_size": 63488 00:09:11.871 }, 00:09:11.871 { 00:09:11.871 "name": "BaseBdev2", 00:09:11.871 "uuid": "45271078-67b6-5a99-b9fe-2f74ab8d4e58", 00:09:11.871 "is_configured": true, 00:09:11.871 "data_offset": 2048, 00:09:11.871 "data_size": 63488 00:09:11.871 }, 00:09:11.871 { 00:09:11.871 "name": "BaseBdev3", 00:09:11.871 "uuid": "ea6d0ebc-35d6-51af-8b34-0916b09b8b87", 00:09:11.871 "is_configured": true, 00:09:11.871 "data_offset": 2048, 00:09:11.871 "data_size": 63488 00:09:11.871 } 00:09:11.871 ] 00:09:11.871 }' 00:09:11.871 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.871 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.131 [2024-11-28 16:22:03.859711] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.131 [2024-11-28 16:22:03.859748] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.131 [2024-11-28 16:22:03.862158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.131 [2024-11-28 16:22:03.862210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.131 [2024-11-28 16:22:03.862306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.131 [2024-11-28 16:22:03.862319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:12.131 { 00:09:12.131 "results": [ 00:09:12.131 { 00:09:12.131 "job": "raid_bdev1", 
00:09:12.131 "core_mask": "0x1", 00:09:12.131 "workload": "randrw", 00:09:12.131 "percentage": 50, 00:09:12.131 "status": "finished", 00:09:12.131 "queue_depth": 1, 00:09:12.131 "io_size": 131072, 00:09:12.131 "runtime": 1.351302, 00:09:12.131 "iops": 14922.644974994486, 00:09:12.131 "mibps": 1865.3306218743107, 00:09:12.131 "io_failed": 0, 00:09:12.131 "io_timeout": 0, 00:09:12.131 "avg_latency_us": 64.53363627799908, 00:09:12.131 "min_latency_us": 22.134497816593885, 00:09:12.131 "max_latency_us": 1459.5353711790392 00:09:12.131 } 00:09:12.131 ], 00:09:12.131 "core_count": 1 00:09:12.131 } 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80114 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80114 ']' 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80114 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.131 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80114 00:09:12.390 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.390 killing process with pid 80114 00:09:12.391 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.391 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80114' 00:09:12.391 16:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80114 00:09:12.391 [2024-11-28 16:22:03.910626] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.391 16:22:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80114 00:09:12.391 [2024-11-28 16:22:03.935529] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XnQXekN3Mn 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:12.651 00:09:12.651 real 0m3.283s 00:09:12.651 user 0m4.147s 00:09:12.651 sys 0m0.538s 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.651 16:22:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.651 ************************************ 00:09:12.651 END TEST raid_read_error_test 00:09:12.651 ************************************ 00:09:12.651 16:22:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:12.651 16:22:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.651 16:22:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.651 16:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.651 ************************************ 00:09:12.651 START TEST raid_write_error_test 00:09:12.651 ************************************ 00:09:12.651 16:22:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Roh58BUEi8 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80243 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80243 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80243 ']' 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.651 16:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.652 [2024-11-28 16:22:04.350663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:12.652 [2024-11-28 16:22:04.350901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80243 ] 00:09:12.912 [2024-11-28 16:22:04.509236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.912 [2024-11-28 16:22:04.553473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.912 [2024-11-28 16:22:04.594651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.912 [2024-11-28 16:22:04.594769] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.482 BaseBdev1_malloc 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.482 true 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.482 [2024-11-28 16:22:05.199938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.482 [2024-11-28 16:22:05.200001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.482 [2024-11-28 16:22:05.200020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.482 [2024-11-28 16:22:05.200029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.482 [2024-11-28 16:22:05.202102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.482 [2024-11-28 16:22:05.202140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.482 BaseBdev1 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.482 BaseBdev2_malloc 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.482 true 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.482 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.482 [2024-11-28 16:22:05.250251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.482 [2024-11-28 16:22:05.250387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.482 [2024-11-28 16:22:05.250410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.482 [2024-11-28 16:22:05.250421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.753 [2024-11-28 16:22:05.252643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.753 [2024-11-28 16:22:05.252675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:13.753 BaseBdev2 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.753 16:22:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.753 BaseBdev3_malloc 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.753 true 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.753 [2024-11-28 16:22:05.290445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:13.753 [2024-11-28 16:22:05.290488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.753 [2024-11-28 16:22:05.290505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:13.753 [2024-11-28 16:22:05.290513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.753 [2024-11-28 16:22:05.292483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.753 [2024-11-28 16:22:05.292514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:13.753 BaseBdev3 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.753 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.753 [2024-11-28 16:22:05.302489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.753 [2024-11-28 16:22:05.304240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.753 [2024-11-28 16:22:05.304324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.753 [2024-11-28 16:22:05.304490] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:13.753 [2024-11-28 16:22:05.304512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:13.753 [2024-11-28 16:22:05.304735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:13.754 [2024-11-28 16:22:05.304910] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:13.754 [2024-11-28 16:22:05.304927] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:13.754 [2024-11-28 16:22:05.305046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.754 "name": "raid_bdev1", 00:09:13.754 "uuid": "4dbba6eb-62c2-49e1-abbd-40801ed29bd6", 00:09:13.754 "strip_size_kb": 0, 00:09:13.754 "state": "online", 00:09:13.754 "raid_level": "raid1", 00:09:13.754 "superblock": true, 00:09:13.754 "num_base_bdevs": 3, 00:09:13.754 "num_base_bdevs_discovered": 3, 00:09:13.754 "num_base_bdevs_operational": 3, 00:09:13.754 "base_bdevs_list": [ 00:09:13.754 { 00:09:13.754 "name": "BaseBdev1", 00:09:13.754 
"uuid": "e1a2da5c-e9ca-5eca-88de-02406ad66605", 00:09:13.754 "is_configured": true, 00:09:13.754 "data_offset": 2048, 00:09:13.754 "data_size": 63488 00:09:13.754 }, 00:09:13.754 { 00:09:13.754 "name": "BaseBdev2", 00:09:13.754 "uuid": "8b321862-ea58-55b4-aab3-8c15aae2ca8c", 00:09:13.754 "is_configured": true, 00:09:13.754 "data_offset": 2048, 00:09:13.754 "data_size": 63488 00:09:13.754 }, 00:09:13.754 { 00:09:13.754 "name": "BaseBdev3", 00:09:13.754 "uuid": "5ae96cf2-820a-5758-a2cf-ff1fc2044c25", 00:09:13.754 "is_configured": true, 00:09:13.754 "data_offset": 2048, 00:09:13.754 "data_size": 63488 00:09:13.754 } 00:09:13.754 ] 00:09:13.754 }' 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.754 16:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.014 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.014 16:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.274 [2024-11-28 16:22:05.822003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.214 [2024-11-28 16:22:06.745112] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:15.214 [2024-11-28 16:22:06.745172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.214 [2024-11-28 16:22:06.745384] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.214 "name": "raid_bdev1", 00:09:15.214 "uuid": "4dbba6eb-62c2-49e1-abbd-40801ed29bd6", 00:09:15.214 "strip_size_kb": 0, 00:09:15.214 "state": "online", 00:09:15.214 "raid_level": "raid1", 00:09:15.214 "superblock": true, 00:09:15.214 "num_base_bdevs": 3, 00:09:15.214 "num_base_bdevs_discovered": 2, 00:09:15.214 "num_base_bdevs_operational": 2, 00:09:15.214 "base_bdevs_list": [ 00:09:15.214 { 00:09:15.214 "name": null, 00:09:15.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.214 "is_configured": false, 00:09:15.214 "data_offset": 0, 00:09:15.214 "data_size": 63488 00:09:15.214 }, 00:09:15.214 { 00:09:15.214 "name": "BaseBdev2", 00:09:15.214 "uuid": "8b321862-ea58-55b4-aab3-8c15aae2ca8c", 00:09:15.214 "is_configured": true, 00:09:15.214 "data_offset": 2048, 00:09:15.214 "data_size": 63488 00:09:15.214 }, 00:09:15.214 { 00:09:15.214 "name": "BaseBdev3", 00:09:15.214 "uuid": "5ae96cf2-820a-5758-a2cf-ff1fc2044c25", 00:09:15.214 "is_configured": true, 00:09:15.214 "data_offset": 2048, 00:09:15.214 "data_size": 63488 00:09:15.214 } 00:09:15.214 ] 00:09:15.214 }' 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.214 16:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.474 [2024-11-28 16:22:07.226909] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.474 [2024-11-28 16:22:07.226950] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.474 [2024-11-28 16:22:07.229390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.474 [2024-11-28 16:22:07.229452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.474 [2024-11-28 16:22:07.229530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.474 [2024-11-28 16:22:07.229540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:15.474 { 00:09:15.474 "results": [ 00:09:15.474 { 00:09:15.474 "job": "raid_bdev1", 00:09:15.474 "core_mask": "0x1", 00:09:15.474 "workload": "randrw", 00:09:15.474 "percentage": 50, 00:09:15.474 "status": "finished", 00:09:15.474 "queue_depth": 1, 00:09:15.474 "io_size": 131072, 00:09:15.474 "runtime": 1.405883, 00:09:15.474 "iops": 16839.950408391025, 00:09:15.474 "mibps": 2104.993801048878, 00:09:15.474 "io_failed": 0, 00:09:15.474 "io_timeout": 0, 00:09:15.474 "avg_latency_us": 56.914344779884075, 00:09:15.474 "min_latency_us": 21.575545851528386, 00:09:15.474 "max_latency_us": 1380.8349344978167 00:09:15.474 } 00:09:15.474 ], 00:09:15.474 "core_count": 1 00:09:15.474 } 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80243 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80243 ']' 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80243 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:15.474 16:22:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.474 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80243 00:09:15.734 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.734 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.734 killing process with pid 80243 00:09:15.734 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80243' 00:09:15.734 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80243 00:09:15.734 [2024-11-28 16:22:07.274377] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.734 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80243 00:09:15.734 [2024-11-28 16:22:07.300058] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Roh58BUEi8 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:15.994 00:09:15.994 real 0m3.293s 00:09:15.994 user 0m4.171s 00:09:15.994 sys 0m0.528s 00:09:15.994 16:22:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.994 16:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.994 ************************************ 00:09:15.994 END TEST raid_write_error_test 00:09:15.994 ************************************ 00:09:15.994 16:22:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:15.994 16:22:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:15.994 16:22:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:15.994 16:22:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:15.994 16:22:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.994 16:22:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.994 ************************************ 00:09:15.994 START TEST raid_state_function_test 00:09:15.994 ************************************ 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:15.994 
16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80376 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80376' 00:09:15.994 Process raid pid: 80376 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80376 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80376 ']' 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.994 16:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.994 [2024-11-28 16:22:07.708196] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:15.994 [2024-11-28 16:22:07.708332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.254 [2024-11-28 16:22:07.869989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.254 [2024-11-28 16:22:07.913457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.254 [2024-11-28 16:22:07.954345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.254 [2024-11-28 16:22:07.954394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.823 [2024-11-28 16:22:08.542981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.823 [2024-11-28 16:22:08.543041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.823 [2024-11-28 16:22:08.543053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.823 [2024-11-28 16:22:08.543062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.823 [2024-11-28 16:22:08.543068] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:16.823 [2024-11-28 16:22:08.543081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.823 [2024-11-28 16:22:08.543087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:16.823 [2024-11-28 16:22:08.543095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.823 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.083 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.083 "name": "Existed_Raid", 00:09:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.083 "strip_size_kb": 64, 00:09:17.083 "state": "configuring", 00:09:17.083 "raid_level": "raid0", 00:09:17.083 "superblock": false, 00:09:17.083 "num_base_bdevs": 4, 00:09:17.083 "num_base_bdevs_discovered": 0, 00:09:17.083 "num_base_bdevs_operational": 4, 00:09:17.083 "base_bdevs_list": [ 00:09:17.083 { 00:09:17.083 "name": "BaseBdev1", 00:09:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.083 "is_configured": false, 00:09:17.083 "data_offset": 0, 00:09:17.083 "data_size": 0 00:09:17.083 }, 00:09:17.083 { 00:09:17.083 "name": "BaseBdev2", 00:09:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.083 "is_configured": false, 00:09:17.083 "data_offset": 0, 00:09:17.083 "data_size": 0 00:09:17.083 }, 00:09:17.083 { 00:09:17.083 "name": "BaseBdev3", 00:09:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.083 "is_configured": false, 00:09:17.083 "data_offset": 0, 00:09:17.083 "data_size": 0 00:09:17.083 }, 00:09:17.083 { 00:09:17.083 "name": "BaseBdev4", 00:09:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.083 "is_configured": false, 00:09:17.083 "data_offset": 0, 00:09:17.083 "data_size": 0 00:09:17.083 } 00:09:17.083 ] 00:09:17.083 }' 00:09:17.083 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.083 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 [2024-11-28 16:22:08.994076] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.344 [2024-11-28 16:22:08.994120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.344 16:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 [2024-11-28 16:22:09.006100] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.344 [2024-11-28 16:22:09.006146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.344 [2024-11-28 16:22:09.006154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.344 [2024-11-28 16:22:09.006164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.344 [2024-11-28 16:22:09.006170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.344 [2024-11-28 16:22:09.006178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.344 [2024-11-28 16:22:09.006184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:17.344 [2024-11-28 16:22:09.006192] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 [2024-11-28 16:22:09.026703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.344 BaseBdev1 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 [ 00:09:17.344 { 00:09:17.344 "name": "BaseBdev1", 00:09:17.344 "aliases": [ 00:09:17.344 "ebf23898-0971-4f98-b7e1-1d5fb225a5f4" 00:09:17.344 ], 00:09:17.344 "product_name": "Malloc disk", 00:09:17.344 "block_size": 512, 00:09:17.344 "num_blocks": 65536, 00:09:17.344 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:17.344 "assigned_rate_limits": { 00:09:17.344 "rw_ios_per_sec": 0, 00:09:17.344 "rw_mbytes_per_sec": 0, 00:09:17.344 "r_mbytes_per_sec": 0, 00:09:17.344 "w_mbytes_per_sec": 0 00:09:17.344 }, 00:09:17.344 "claimed": true, 00:09:17.344 "claim_type": "exclusive_write", 00:09:17.344 "zoned": false, 00:09:17.344 "supported_io_types": { 00:09:17.344 "read": true, 00:09:17.344 "write": true, 00:09:17.344 "unmap": true, 00:09:17.344 "flush": true, 00:09:17.344 "reset": true, 00:09:17.344 "nvme_admin": false, 00:09:17.344 "nvme_io": false, 00:09:17.344 "nvme_io_md": false, 00:09:17.344 "write_zeroes": true, 00:09:17.344 "zcopy": true, 00:09:17.344 "get_zone_info": false, 00:09:17.344 "zone_management": false, 00:09:17.344 "zone_append": false, 00:09:17.344 "compare": false, 00:09:17.344 "compare_and_write": false, 00:09:17.344 "abort": true, 00:09:17.344 "seek_hole": false, 00:09:17.344 "seek_data": false, 00:09:17.344 "copy": true, 00:09:17.344 "nvme_iov_md": false 00:09:17.344 }, 00:09:17.344 "memory_domains": [ 00:09:17.344 { 00:09:17.344 "dma_device_id": "system", 00:09:17.344 "dma_device_type": 1 00:09:17.344 }, 00:09:17.344 { 00:09:17.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.344 "dma_device_type": 2 00:09:17.344 } 00:09:17.344 ], 00:09:17.344 "driver_specific": {} 00:09:17.344 } 00:09:17.344 ] 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.344 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.605 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.605 "name": "Existed_Raid", 
00:09:17.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.605 "strip_size_kb": 64, 00:09:17.605 "state": "configuring", 00:09:17.605 "raid_level": "raid0", 00:09:17.605 "superblock": false, 00:09:17.605 "num_base_bdevs": 4, 00:09:17.605 "num_base_bdevs_discovered": 1, 00:09:17.605 "num_base_bdevs_operational": 4, 00:09:17.605 "base_bdevs_list": [ 00:09:17.605 { 00:09:17.605 "name": "BaseBdev1", 00:09:17.605 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:17.605 "is_configured": true, 00:09:17.605 "data_offset": 0, 00:09:17.605 "data_size": 65536 00:09:17.605 }, 00:09:17.605 { 00:09:17.605 "name": "BaseBdev2", 00:09:17.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.605 "is_configured": false, 00:09:17.605 "data_offset": 0, 00:09:17.605 "data_size": 0 00:09:17.605 }, 00:09:17.605 { 00:09:17.605 "name": "BaseBdev3", 00:09:17.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.605 "is_configured": false, 00:09:17.605 "data_offset": 0, 00:09:17.605 "data_size": 0 00:09:17.605 }, 00:09:17.605 { 00:09:17.605 "name": "BaseBdev4", 00:09:17.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.605 "is_configured": false, 00:09:17.605 "data_offset": 0, 00:09:17.605 "data_size": 0 00:09:17.605 } 00:09:17.605 ] 00:09:17.605 }' 00:09:17.605 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.605 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.865 [2024-11-28 16:22:09.465968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.865 [2024-11-28 16:22:09.466017] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.865 [2024-11-28 16:22:09.473991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.865 [2024-11-28 16:22:09.475767] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.865 [2024-11-28 16:22:09.475824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.865 [2024-11-28 16:22:09.475833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.865 [2024-11-28 16:22:09.475842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.865 [2024-11-28 16:22:09.475860] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:17.865 [2024-11-28 16:22:09.475869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.865 "name": "Existed_Raid", 00:09:17.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.865 "strip_size_kb": 64, 00:09:17.865 "state": "configuring", 00:09:17.865 "raid_level": "raid0", 00:09:17.865 "superblock": false, 00:09:17.865 "num_base_bdevs": 4, 00:09:17.865 
"num_base_bdevs_discovered": 1, 00:09:17.865 "num_base_bdevs_operational": 4, 00:09:17.865 "base_bdevs_list": [ 00:09:17.865 { 00:09:17.865 "name": "BaseBdev1", 00:09:17.865 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:17.865 "is_configured": true, 00:09:17.865 "data_offset": 0, 00:09:17.865 "data_size": 65536 00:09:17.865 }, 00:09:17.865 { 00:09:17.865 "name": "BaseBdev2", 00:09:17.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.865 "is_configured": false, 00:09:17.865 "data_offset": 0, 00:09:17.865 "data_size": 0 00:09:17.865 }, 00:09:17.865 { 00:09:17.865 "name": "BaseBdev3", 00:09:17.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.865 "is_configured": false, 00:09:17.865 "data_offset": 0, 00:09:17.865 "data_size": 0 00:09:17.865 }, 00:09:17.865 { 00:09:17.865 "name": "BaseBdev4", 00:09:17.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.865 "is_configured": false, 00:09:17.865 "data_offset": 0, 00:09:17.865 "data_size": 0 00:09:17.865 } 00:09:17.865 ] 00:09:17.865 }' 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.865 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.435 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.435 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.435 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.435 [2024-11-28 16:22:09.926467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.435 BaseBdev2 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:18.436 16:22:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.436 [ 00:09:18.436 { 00:09:18.436 "name": "BaseBdev2", 00:09:18.436 "aliases": [ 00:09:18.436 "09094920-9593-41f4-837a-4f6247f6b14a" 00:09:18.436 ], 00:09:18.436 "product_name": "Malloc disk", 00:09:18.436 "block_size": 512, 00:09:18.436 "num_blocks": 65536, 00:09:18.436 "uuid": "09094920-9593-41f4-837a-4f6247f6b14a", 00:09:18.436 "assigned_rate_limits": { 00:09:18.436 "rw_ios_per_sec": 0, 00:09:18.436 "rw_mbytes_per_sec": 0, 00:09:18.436 "r_mbytes_per_sec": 0, 00:09:18.436 "w_mbytes_per_sec": 0 00:09:18.436 }, 00:09:18.436 "claimed": true, 00:09:18.436 "claim_type": "exclusive_write", 00:09:18.436 "zoned": false, 00:09:18.436 "supported_io_types": { 
00:09:18.436 "read": true, 00:09:18.436 "write": true, 00:09:18.436 "unmap": true, 00:09:18.436 "flush": true, 00:09:18.436 "reset": true, 00:09:18.436 "nvme_admin": false, 00:09:18.436 "nvme_io": false, 00:09:18.436 "nvme_io_md": false, 00:09:18.436 "write_zeroes": true, 00:09:18.436 "zcopy": true, 00:09:18.436 "get_zone_info": false, 00:09:18.436 "zone_management": false, 00:09:18.436 "zone_append": false, 00:09:18.436 "compare": false, 00:09:18.436 "compare_and_write": false, 00:09:18.436 "abort": true, 00:09:18.436 "seek_hole": false, 00:09:18.436 "seek_data": false, 00:09:18.436 "copy": true, 00:09:18.436 "nvme_iov_md": false 00:09:18.436 }, 00:09:18.436 "memory_domains": [ 00:09:18.436 { 00:09:18.436 "dma_device_id": "system", 00:09:18.436 "dma_device_type": 1 00:09:18.436 }, 00:09:18.436 { 00:09:18.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.436 "dma_device_type": 2 00:09:18.436 } 00:09:18.436 ], 00:09:18.436 "driver_specific": {} 00:09:18.436 } 00:09:18.436 ] 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.436 16:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.436 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.436 "name": "Existed_Raid", 00:09:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.436 "strip_size_kb": 64, 00:09:18.436 "state": "configuring", 00:09:18.436 "raid_level": "raid0", 00:09:18.436 "superblock": false, 00:09:18.436 "num_base_bdevs": 4, 00:09:18.436 "num_base_bdevs_discovered": 2, 00:09:18.436 "num_base_bdevs_operational": 4, 00:09:18.436 "base_bdevs_list": [ 00:09:18.436 { 00:09:18.436 "name": "BaseBdev1", 00:09:18.436 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:18.436 "is_configured": true, 00:09:18.436 "data_offset": 0, 00:09:18.436 "data_size": 65536 00:09:18.436 }, 00:09:18.436 { 00:09:18.436 "name": "BaseBdev2", 00:09:18.436 "uuid": "09094920-9593-41f4-837a-4f6247f6b14a", 00:09:18.436 
"is_configured": true, 00:09:18.436 "data_offset": 0, 00:09:18.436 "data_size": 65536 00:09:18.436 }, 00:09:18.436 { 00:09:18.436 "name": "BaseBdev3", 00:09:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.436 "is_configured": false, 00:09:18.436 "data_offset": 0, 00:09:18.436 "data_size": 0 00:09:18.436 }, 00:09:18.436 { 00:09:18.436 "name": "BaseBdev4", 00:09:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.436 "is_configured": false, 00:09:18.436 "data_offset": 0, 00:09:18.436 "data_size": 0 00:09:18.436 } 00:09:18.436 ] 00:09:18.436 }' 00:09:18.436 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.436 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.696 [2024-11-28 16:22:10.448323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.696 BaseBdev3 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.696 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.956 [ 00:09:18.956 { 00:09:18.956 "name": "BaseBdev3", 00:09:18.956 "aliases": [ 00:09:18.956 "415aee8b-d425-4d07-a725-c28ebf35d331" 00:09:18.956 ], 00:09:18.956 "product_name": "Malloc disk", 00:09:18.956 "block_size": 512, 00:09:18.956 "num_blocks": 65536, 00:09:18.956 "uuid": "415aee8b-d425-4d07-a725-c28ebf35d331", 00:09:18.956 "assigned_rate_limits": { 00:09:18.956 "rw_ios_per_sec": 0, 00:09:18.956 "rw_mbytes_per_sec": 0, 00:09:18.956 "r_mbytes_per_sec": 0, 00:09:18.956 "w_mbytes_per_sec": 0 00:09:18.956 }, 00:09:18.956 "claimed": true, 00:09:18.956 "claim_type": "exclusive_write", 00:09:18.956 "zoned": false, 00:09:18.956 "supported_io_types": { 00:09:18.956 "read": true, 00:09:18.956 "write": true, 00:09:18.956 "unmap": true, 00:09:18.956 "flush": true, 00:09:18.956 "reset": true, 00:09:18.956 "nvme_admin": false, 00:09:18.956 "nvme_io": false, 00:09:18.956 "nvme_io_md": false, 00:09:18.956 "write_zeroes": true, 00:09:18.956 "zcopy": true, 00:09:18.956 "get_zone_info": false, 00:09:18.956 "zone_management": false, 00:09:18.956 "zone_append": false, 00:09:18.956 "compare": false, 00:09:18.956 "compare_and_write": false, 
00:09:18.956 "abort": true, 00:09:18.956 "seek_hole": false, 00:09:18.956 "seek_data": false, 00:09:18.956 "copy": true, 00:09:18.956 "nvme_iov_md": false 00:09:18.956 }, 00:09:18.956 "memory_domains": [ 00:09:18.956 { 00:09:18.956 "dma_device_id": "system", 00:09:18.956 "dma_device_type": 1 00:09:18.956 }, 00:09:18.956 { 00:09:18.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.956 "dma_device_type": 2 00:09:18.956 } 00:09:18.956 ], 00:09:18.956 "driver_specific": {} 00:09:18.956 } 00:09:18.956 ] 00:09:18.956 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.956 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:18.956 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:18.956 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.956 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.956 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.957 "name": "Existed_Raid", 00:09:18.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.957 "strip_size_kb": 64, 00:09:18.957 "state": "configuring", 00:09:18.957 "raid_level": "raid0", 00:09:18.957 "superblock": false, 00:09:18.957 "num_base_bdevs": 4, 00:09:18.957 "num_base_bdevs_discovered": 3, 00:09:18.957 "num_base_bdevs_operational": 4, 00:09:18.957 "base_bdevs_list": [ 00:09:18.957 { 00:09:18.957 "name": "BaseBdev1", 00:09:18.957 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 0, 00:09:18.957 "data_size": 65536 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev2", 00:09:18.957 "uuid": "09094920-9593-41f4-837a-4f6247f6b14a", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 0, 00:09:18.957 "data_size": 65536 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev3", 00:09:18.957 "uuid": "415aee8b-d425-4d07-a725-c28ebf35d331", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 0, 00:09:18.957 "data_size": 65536 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev4", 00:09:18.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.957 "is_configured": false, 
00:09:18.957 "data_offset": 0, 00:09:18.957 "data_size": 0 00:09:18.957 } 00:09:18.957 ] 00:09:18.957 }' 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.957 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.217 [2024-11-28 16:22:10.922368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:19.217 [2024-11-28 16:22:10.922416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:19.217 [2024-11-28 16:22:10.922426] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:19.217 [2024-11-28 16:22:10.922694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:19.217 [2024-11-28 16:22:10.922856] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:19.217 [2024-11-28 16:22:10.922881] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:19.217 [2024-11-28 16:22:10.923104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.217 BaseBdev4 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.217 [ 00:09:19.217 { 00:09:19.217 "name": "BaseBdev4", 00:09:19.217 "aliases": [ 00:09:19.217 "03bdfa1c-f59f-41bf-93c2-cf5ab0a0896c" 00:09:19.217 ], 00:09:19.217 "product_name": "Malloc disk", 00:09:19.217 "block_size": 512, 00:09:19.217 "num_blocks": 65536, 00:09:19.217 "uuid": "03bdfa1c-f59f-41bf-93c2-cf5ab0a0896c", 00:09:19.217 "assigned_rate_limits": { 00:09:19.217 "rw_ios_per_sec": 0, 00:09:19.217 "rw_mbytes_per_sec": 0, 00:09:19.217 "r_mbytes_per_sec": 0, 00:09:19.217 "w_mbytes_per_sec": 0 00:09:19.217 }, 00:09:19.217 "claimed": true, 00:09:19.217 "claim_type": "exclusive_write", 00:09:19.217 "zoned": false, 00:09:19.217 "supported_io_types": { 00:09:19.217 "read": true, 00:09:19.217 "write": true, 00:09:19.217 "unmap": true, 00:09:19.217 "flush": true, 00:09:19.217 "reset": true, 00:09:19.217 
"nvme_admin": false, 00:09:19.217 "nvme_io": false, 00:09:19.217 "nvme_io_md": false, 00:09:19.217 "write_zeroes": true, 00:09:19.217 "zcopy": true, 00:09:19.217 "get_zone_info": false, 00:09:19.217 "zone_management": false, 00:09:19.217 "zone_append": false, 00:09:19.217 "compare": false, 00:09:19.217 "compare_and_write": false, 00:09:19.217 "abort": true, 00:09:19.217 "seek_hole": false, 00:09:19.217 "seek_data": false, 00:09:19.217 "copy": true, 00:09:19.217 "nvme_iov_md": false 00:09:19.217 }, 00:09:19.217 "memory_domains": [ 00:09:19.217 { 00:09:19.217 "dma_device_id": "system", 00:09:19.217 "dma_device_type": 1 00:09:19.217 }, 00:09:19.217 { 00:09:19.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.217 "dma_device_type": 2 00:09:19.217 } 00:09:19.217 ], 00:09:19.217 "driver_specific": {} 00:09:19.217 } 00:09:19.217 ] 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.217 16:22:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.217 16:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.477 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.477 "name": "Existed_Raid", 00:09:19.477 "uuid": "a0fd3821-b2f7-4bfa-8a33-95240019c1dd", 00:09:19.477 "strip_size_kb": 64, 00:09:19.477 "state": "online", 00:09:19.477 "raid_level": "raid0", 00:09:19.477 "superblock": false, 00:09:19.477 "num_base_bdevs": 4, 00:09:19.477 "num_base_bdevs_discovered": 4, 00:09:19.477 "num_base_bdevs_operational": 4, 00:09:19.477 "base_bdevs_list": [ 00:09:19.477 { 00:09:19.477 "name": "BaseBdev1", 00:09:19.477 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:19.477 "is_configured": true, 00:09:19.477 "data_offset": 0, 00:09:19.477 "data_size": 65536 00:09:19.477 }, 00:09:19.477 { 00:09:19.477 "name": "BaseBdev2", 00:09:19.477 "uuid": "09094920-9593-41f4-837a-4f6247f6b14a", 00:09:19.477 "is_configured": true, 00:09:19.477 "data_offset": 0, 00:09:19.477 "data_size": 65536 00:09:19.477 }, 00:09:19.477 { 00:09:19.477 "name": "BaseBdev3", 00:09:19.477 "uuid": 
"415aee8b-d425-4d07-a725-c28ebf35d331", 00:09:19.477 "is_configured": true, 00:09:19.477 "data_offset": 0, 00:09:19.477 "data_size": 65536 00:09:19.477 }, 00:09:19.477 { 00:09:19.477 "name": "BaseBdev4", 00:09:19.477 "uuid": "03bdfa1c-f59f-41bf-93c2-cf5ab0a0896c", 00:09:19.477 "is_configured": true, 00:09:19.477 "data_offset": 0, 00:09:19.477 "data_size": 65536 00:09:19.477 } 00:09:19.477 ] 00:09:19.477 }' 00:09:19.477 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.477 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.737 [2024-11-28 16:22:11.437825] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.737 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.737 16:22:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.737 "name": "Existed_Raid", 00:09:19.737 "aliases": [ 00:09:19.737 "a0fd3821-b2f7-4bfa-8a33-95240019c1dd" 00:09:19.737 ], 00:09:19.737 "product_name": "Raid Volume", 00:09:19.737 "block_size": 512, 00:09:19.737 "num_blocks": 262144, 00:09:19.737 "uuid": "a0fd3821-b2f7-4bfa-8a33-95240019c1dd", 00:09:19.737 "assigned_rate_limits": { 00:09:19.737 "rw_ios_per_sec": 0, 00:09:19.737 "rw_mbytes_per_sec": 0, 00:09:19.737 "r_mbytes_per_sec": 0, 00:09:19.737 "w_mbytes_per_sec": 0 00:09:19.737 }, 00:09:19.737 "claimed": false, 00:09:19.737 "zoned": false, 00:09:19.737 "supported_io_types": { 00:09:19.737 "read": true, 00:09:19.737 "write": true, 00:09:19.737 "unmap": true, 00:09:19.737 "flush": true, 00:09:19.737 "reset": true, 00:09:19.737 "nvme_admin": false, 00:09:19.737 "nvme_io": false, 00:09:19.737 "nvme_io_md": false, 00:09:19.737 "write_zeroes": true, 00:09:19.738 "zcopy": false, 00:09:19.738 "get_zone_info": false, 00:09:19.738 "zone_management": false, 00:09:19.738 "zone_append": false, 00:09:19.738 "compare": false, 00:09:19.738 "compare_and_write": false, 00:09:19.738 "abort": false, 00:09:19.738 "seek_hole": false, 00:09:19.738 "seek_data": false, 00:09:19.738 "copy": false, 00:09:19.738 "nvme_iov_md": false 00:09:19.738 }, 00:09:19.738 "memory_domains": [ 00:09:19.738 { 00:09:19.738 "dma_device_id": "system", 00:09:19.738 "dma_device_type": 1 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.738 "dma_device_type": 2 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "system", 00:09:19.738 "dma_device_type": 1 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.738 "dma_device_type": 2 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "system", 00:09:19.738 "dma_device_type": 1 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:19.738 "dma_device_type": 2 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "system", 00:09:19.738 "dma_device_type": 1 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.738 "dma_device_type": 2 00:09:19.738 } 00:09:19.738 ], 00:09:19.738 "driver_specific": { 00:09:19.738 "raid": { 00:09:19.738 "uuid": "a0fd3821-b2f7-4bfa-8a33-95240019c1dd", 00:09:19.738 "strip_size_kb": 64, 00:09:19.738 "state": "online", 00:09:19.738 "raid_level": "raid0", 00:09:19.738 "superblock": false, 00:09:19.738 "num_base_bdevs": 4, 00:09:19.738 "num_base_bdevs_discovered": 4, 00:09:19.738 "num_base_bdevs_operational": 4, 00:09:19.738 "base_bdevs_list": [ 00:09:19.738 { 00:09:19.738 "name": "BaseBdev1", 00:09:19.738 "uuid": "ebf23898-0971-4f98-b7e1-1d5fb225a5f4", 00:09:19.738 "is_configured": true, 00:09:19.738 "data_offset": 0, 00:09:19.738 "data_size": 65536 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "name": "BaseBdev2", 00:09:19.738 "uuid": "09094920-9593-41f4-837a-4f6247f6b14a", 00:09:19.738 "is_configured": true, 00:09:19.738 "data_offset": 0, 00:09:19.738 "data_size": 65536 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "name": "BaseBdev3", 00:09:19.738 "uuid": "415aee8b-d425-4d07-a725-c28ebf35d331", 00:09:19.738 "is_configured": true, 00:09:19.738 "data_offset": 0, 00:09:19.738 "data_size": 65536 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "name": "BaseBdev4", 00:09:19.738 "uuid": "03bdfa1c-f59f-41bf-93c2-cf5ab0a0896c", 00:09:19.738 "is_configured": true, 00:09:19.738 "data_offset": 0, 00:09:19.738 "data_size": 65536 00:09:19.738 } 00:09:19.738 ] 00:09:19.738 } 00:09:19.738 } 00:09:19.738 }' 00:09:19.738 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.738 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:19.738 BaseBdev2 00:09:19.738 BaseBdev3 
00:09:19.738 BaseBdev4' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.998 16:22:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.998 16:22:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.998 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.998 [2024-11-28 16:22:11.737019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.998 [2024-11-28 16:22:11.737051] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.999 [2024-11-28 16:22:11.737096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.999 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.259 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.259 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.259 "name": "Existed_Raid", 00:09:20.259 "uuid": "a0fd3821-b2f7-4bfa-8a33-95240019c1dd", 00:09:20.259 "strip_size_kb": 64, 00:09:20.259 "state": "offline", 00:09:20.259 "raid_level": "raid0", 00:09:20.259 "superblock": false, 00:09:20.259 "num_base_bdevs": 4, 00:09:20.259 "num_base_bdevs_discovered": 3, 00:09:20.259 "num_base_bdevs_operational": 3, 00:09:20.259 "base_bdevs_list": [ 00:09:20.259 { 00:09:20.259 "name": null, 00:09:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.259 "is_configured": false, 00:09:20.259 "data_offset": 0, 00:09:20.259 "data_size": 65536 00:09:20.259 }, 00:09:20.259 { 00:09:20.259 "name": "BaseBdev2", 00:09:20.259 "uuid": "09094920-9593-41f4-837a-4f6247f6b14a", 00:09:20.259 "is_configured": 
true, 00:09:20.259 "data_offset": 0, 00:09:20.259 "data_size": 65536 00:09:20.259 }, 00:09:20.259 { 00:09:20.259 "name": "BaseBdev3", 00:09:20.259 "uuid": "415aee8b-d425-4d07-a725-c28ebf35d331", 00:09:20.259 "is_configured": true, 00:09:20.259 "data_offset": 0, 00:09:20.259 "data_size": 65536 00:09:20.259 }, 00:09:20.259 { 00:09:20.259 "name": "BaseBdev4", 00:09:20.259 "uuid": "03bdfa1c-f59f-41bf-93c2-cf5ab0a0896c", 00:09:20.259 "is_configured": true, 00:09:20.259 "data_offset": 0, 00:09:20.259 "data_size": 65536 00:09:20.259 } 00:09:20.259 ] 00:09:20.259 }' 00:09:20.259 16:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.259 16:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.519 [2024-11-28 16:22:12.243245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.519 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 [2024-11-28 16:22:12.314160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.780 16:22:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 [2024-11-28 16:22:12.380974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:20.780 [2024-11-28 16:22:12.381019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 BaseBdev2 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 [ 00:09:20.780 { 00:09:20.780 "name": "BaseBdev2", 00:09:20.780 "aliases": [ 00:09:20.780 "623579ad-28e0-40a2-a273-1482fedd4e83" 00:09:20.780 ], 00:09:20.780 "product_name": "Malloc disk", 00:09:20.780 "block_size": 512, 00:09:20.780 "num_blocks": 65536, 00:09:20.780 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:20.780 "assigned_rate_limits": { 00:09:20.780 "rw_ios_per_sec": 0, 00:09:20.780 "rw_mbytes_per_sec": 0, 00:09:20.780 "r_mbytes_per_sec": 0, 00:09:20.780 "w_mbytes_per_sec": 0 00:09:20.780 }, 00:09:20.780 "claimed": false, 00:09:20.780 "zoned": false, 00:09:20.780 "supported_io_types": { 00:09:20.780 "read": true, 00:09:20.780 "write": true, 00:09:20.780 "unmap": true, 00:09:20.780 "flush": true, 00:09:20.780 "reset": true, 00:09:20.780 "nvme_admin": false, 00:09:20.780 "nvme_io": false, 00:09:20.780 "nvme_io_md": false, 00:09:20.780 "write_zeroes": true, 00:09:20.780 "zcopy": true, 00:09:20.780 "get_zone_info": false, 00:09:20.780 "zone_management": false, 00:09:20.780 "zone_append": false, 00:09:20.780 "compare": false, 00:09:20.780 "compare_and_write": false, 00:09:20.780 "abort": true, 00:09:20.780 "seek_hole": false, 00:09:20.780 
"seek_data": false, 00:09:20.780 "copy": true, 00:09:20.780 "nvme_iov_md": false 00:09:20.780 }, 00:09:20.780 "memory_domains": [ 00:09:20.780 { 00:09:20.780 "dma_device_id": "system", 00:09:20.780 "dma_device_type": 1 00:09:20.780 }, 00:09:20.780 { 00:09:20.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.780 "dma_device_type": 2 00:09:20.780 } 00:09:20.780 ], 00:09:20.780 "driver_specific": {} 00:09:20.780 } 00:09:20.780 ] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 BaseBdev3 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.780 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.780 [ 00:09:20.780 { 00:09:20.780 "name": "BaseBdev3", 00:09:20.780 "aliases": [ 00:09:20.780 "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab" 00:09:20.780 ], 00:09:20.780 "product_name": "Malloc disk", 00:09:20.780 "block_size": 512, 00:09:20.780 "num_blocks": 65536, 00:09:20.780 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:20.780 "assigned_rate_limits": { 00:09:20.780 "rw_ios_per_sec": 0, 00:09:20.780 "rw_mbytes_per_sec": 0, 00:09:20.781 "r_mbytes_per_sec": 0, 00:09:20.781 "w_mbytes_per_sec": 0 00:09:20.781 }, 00:09:20.781 "claimed": false, 00:09:20.781 "zoned": false, 00:09:20.781 "supported_io_types": { 00:09:20.781 "read": true, 00:09:20.781 "write": true, 00:09:20.781 "unmap": true, 00:09:20.781 "flush": true, 00:09:20.781 "reset": true, 00:09:20.781 "nvme_admin": false, 00:09:20.781 "nvme_io": false, 00:09:20.781 "nvme_io_md": false, 00:09:20.781 "write_zeroes": true, 00:09:20.781 "zcopy": true, 00:09:20.781 "get_zone_info": false, 00:09:20.781 "zone_management": false, 00:09:20.781 "zone_append": false, 00:09:20.781 "compare": false, 00:09:20.781 "compare_and_write": false, 00:09:20.781 "abort": true, 00:09:20.781 "seek_hole": false, 00:09:20.781 "seek_data": false, 
00:09:20.781 "copy": true, 00:09:20.781 "nvme_iov_md": false 00:09:20.781 }, 00:09:20.781 "memory_domains": [ 00:09:20.781 { 00:09:20.781 "dma_device_id": "system", 00:09:20.781 "dma_device_type": 1 00:09:20.781 }, 00:09:20.781 { 00:09:20.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.781 "dma_device_type": 2 00:09:20.781 } 00:09:20.781 ], 00:09:20.781 "driver_specific": {} 00:09:20.781 } 00:09:20.781 ] 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.781 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.041 BaseBdev4 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.041 
16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.041 [ 00:09:21.041 { 00:09:21.041 "name": "BaseBdev4", 00:09:21.041 "aliases": [ 00:09:21.041 "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62" 00:09:21.041 ], 00:09:21.041 "product_name": "Malloc disk", 00:09:21.041 "block_size": 512, 00:09:21.041 "num_blocks": 65536, 00:09:21.041 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:21.041 "assigned_rate_limits": { 00:09:21.041 "rw_ios_per_sec": 0, 00:09:21.041 "rw_mbytes_per_sec": 0, 00:09:21.041 "r_mbytes_per_sec": 0, 00:09:21.041 "w_mbytes_per_sec": 0 00:09:21.041 }, 00:09:21.041 "claimed": false, 00:09:21.041 "zoned": false, 00:09:21.041 "supported_io_types": { 00:09:21.041 "read": true, 00:09:21.041 "write": true, 00:09:21.041 "unmap": true, 00:09:21.041 "flush": true, 00:09:21.041 "reset": true, 00:09:21.041 "nvme_admin": false, 00:09:21.041 "nvme_io": false, 00:09:21.041 "nvme_io_md": false, 00:09:21.041 "write_zeroes": true, 00:09:21.041 "zcopy": true, 00:09:21.041 "get_zone_info": false, 00:09:21.041 "zone_management": false, 00:09:21.041 "zone_append": false, 00:09:21.041 "compare": false, 00:09:21.041 "compare_and_write": false, 00:09:21.041 "abort": true, 00:09:21.041 "seek_hole": false, 00:09:21.041 "seek_data": false, 00:09:21.041 
"copy": true, 00:09:21.041 "nvme_iov_md": false 00:09:21.041 }, 00:09:21.041 "memory_domains": [ 00:09:21.041 { 00:09:21.041 "dma_device_id": "system", 00:09:21.041 "dma_device_type": 1 00:09:21.041 }, 00:09:21.041 { 00:09:21.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.041 "dma_device_type": 2 00:09:21.041 } 00:09:21.041 ], 00:09:21.041 "driver_specific": {} 00:09:21.041 } 00:09:21.041 ] 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.041 [2024-11-28 16:22:12.607865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.041 [2024-11-28 16:22:12.607932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.041 [2024-11-28 16:22:12.607953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.041 [2024-11-28 16:22:12.609759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.041 [2024-11-28 16:22:12.609828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.041 16:22:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.041 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.042 "name": "Existed_Raid", 00:09:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.042 "strip_size_kb": 64, 00:09:21.042 "state": "configuring", 00:09:21.042 
"raid_level": "raid0", 00:09:21.042 "superblock": false, 00:09:21.042 "num_base_bdevs": 4, 00:09:21.042 "num_base_bdevs_discovered": 3, 00:09:21.042 "num_base_bdevs_operational": 4, 00:09:21.042 "base_bdevs_list": [ 00:09:21.042 { 00:09:21.042 "name": "BaseBdev1", 00:09:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.042 "is_configured": false, 00:09:21.042 "data_offset": 0, 00:09:21.042 "data_size": 0 00:09:21.042 }, 00:09:21.042 { 00:09:21.042 "name": "BaseBdev2", 00:09:21.042 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:21.042 "is_configured": true, 00:09:21.042 "data_offset": 0, 00:09:21.042 "data_size": 65536 00:09:21.042 }, 00:09:21.042 { 00:09:21.042 "name": "BaseBdev3", 00:09:21.042 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:21.042 "is_configured": true, 00:09:21.042 "data_offset": 0, 00:09:21.042 "data_size": 65536 00:09:21.042 }, 00:09:21.042 { 00:09:21.042 "name": "BaseBdev4", 00:09:21.042 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:21.042 "is_configured": true, 00:09:21.042 "data_offset": 0, 00:09:21.042 "data_size": 65536 00:09:21.042 } 00:09:21.042 ] 00:09:21.042 }' 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.042 16:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.302 [2024-11-28 16:22:13.015147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.302 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.562 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.562 "name": "Existed_Raid", 00:09:21.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.562 "strip_size_kb": 64, 00:09:21.562 "state": "configuring", 00:09:21.562 "raid_level": "raid0", 00:09:21.562 "superblock": false, 00:09:21.562 
"num_base_bdevs": 4, 00:09:21.562 "num_base_bdevs_discovered": 2, 00:09:21.562 "num_base_bdevs_operational": 4, 00:09:21.562 "base_bdevs_list": [ 00:09:21.562 { 00:09:21.562 "name": "BaseBdev1", 00:09:21.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.562 "is_configured": false, 00:09:21.562 "data_offset": 0, 00:09:21.562 "data_size": 0 00:09:21.562 }, 00:09:21.562 { 00:09:21.562 "name": null, 00:09:21.562 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:21.562 "is_configured": false, 00:09:21.562 "data_offset": 0, 00:09:21.562 "data_size": 65536 00:09:21.562 }, 00:09:21.562 { 00:09:21.562 "name": "BaseBdev3", 00:09:21.562 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:21.562 "is_configured": true, 00:09:21.562 "data_offset": 0, 00:09:21.562 "data_size": 65536 00:09:21.562 }, 00:09:21.562 { 00:09:21.562 "name": "BaseBdev4", 00:09:21.562 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:21.562 "is_configured": true, 00:09:21.562 "data_offset": 0, 00:09:21.562 "data_size": 65536 00:09:21.562 } 00:09:21.562 ] 00:09:21.562 }' 00:09:21.562 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.562 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:21.836 16:22:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.836 [2024-11-28 16:22:13.469156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.836 BaseBdev1 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.836 [ 00:09:21.836 { 00:09:21.836 "name": "BaseBdev1", 00:09:21.836 "aliases": [ 00:09:21.836 "f5dd455d-66b6-474f-96f5-23c5c7974713" 00:09:21.836 ], 00:09:21.836 "product_name": "Malloc disk", 00:09:21.836 "block_size": 512, 00:09:21.836 "num_blocks": 65536, 00:09:21.836 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:21.836 "assigned_rate_limits": { 00:09:21.836 "rw_ios_per_sec": 0, 00:09:21.836 "rw_mbytes_per_sec": 0, 00:09:21.836 "r_mbytes_per_sec": 0, 00:09:21.836 "w_mbytes_per_sec": 0 00:09:21.836 }, 00:09:21.836 "claimed": true, 00:09:21.836 "claim_type": "exclusive_write", 00:09:21.836 "zoned": false, 00:09:21.836 "supported_io_types": { 00:09:21.836 "read": true, 00:09:21.836 "write": true, 00:09:21.836 "unmap": true, 00:09:21.836 "flush": true, 00:09:21.836 "reset": true, 00:09:21.836 "nvme_admin": false, 00:09:21.836 "nvme_io": false, 00:09:21.836 "nvme_io_md": false, 00:09:21.836 "write_zeroes": true, 00:09:21.836 "zcopy": true, 00:09:21.836 "get_zone_info": false, 00:09:21.836 "zone_management": false, 00:09:21.836 "zone_append": false, 00:09:21.836 "compare": false, 00:09:21.836 "compare_and_write": false, 00:09:21.836 "abort": true, 00:09:21.836 "seek_hole": false, 00:09:21.836 "seek_data": false, 00:09:21.836 "copy": true, 00:09:21.836 "nvme_iov_md": false 00:09:21.836 }, 00:09:21.836 "memory_domains": [ 00:09:21.836 { 00:09:21.836 "dma_device_id": "system", 00:09:21.836 "dma_device_type": 1 00:09:21.836 }, 00:09:21.836 { 00:09:21.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.836 "dma_device_type": 2 00:09:21.836 } 00:09:21.836 ], 00:09:21.836 "driver_specific": {} 00:09:21.836 } 00:09:21.836 ] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.836 "name": "Existed_Raid", 00:09:21.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.836 "strip_size_kb": 64, 00:09:21.836 "state": "configuring", 00:09:21.836 "raid_level": "raid0", 00:09:21.836 "superblock": false, 
00:09:21.836 "num_base_bdevs": 4, 00:09:21.836 "num_base_bdevs_discovered": 3, 00:09:21.836 "num_base_bdevs_operational": 4, 00:09:21.836 "base_bdevs_list": [ 00:09:21.836 { 00:09:21.836 "name": "BaseBdev1", 00:09:21.836 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:21.836 "is_configured": true, 00:09:21.836 "data_offset": 0, 00:09:21.836 "data_size": 65536 00:09:21.836 }, 00:09:21.836 { 00:09:21.836 "name": null, 00:09:21.836 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:21.836 "is_configured": false, 00:09:21.836 "data_offset": 0, 00:09:21.836 "data_size": 65536 00:09:21.836 }, 00:09:21.836 { 00:09:21.836 "name": "BaseBdev3", 00:09:21.836 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:21.836 "is_configured": true, 00:09:21.836 "data_offset": 0, 00:09:21.836 "data_size": 65536 00:09:21.836 }, 00:09:21.836 { 00:09:21.836 "name": "BaseBdev4", 00:09:21.836 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:21.836 "is_configured": true, 00:09:21.836 "data_offset": 0, 00:09:21.836 "data_size": 65536 00:09:21.836 } 00:09:21.836 ] 00:09:21.836 }' 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.836 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:22.455 16:22:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.455 16:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.455 [2024-11-28 16:22:14.000304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.455 "name": "Existed_Raid", 00:09:22.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.455 "strip_size_kb": 64, 00:09:22.455 "state": "configuring", 00:09:22.455 "raid_level": "raid0", 00:09:22.455 "superblock": false, 00:09:22.455 "num_base_bdevs": 4, 00:09:22.455 "num_base_bdevs_discovered": 2, 00:09:22.455 "num_base_bdevs_operational": 4, 00:09:22.455 "base_bdevs_list": [ 00:09:22.455 { 00:09:22.455 "name": "BaseBdev1", 00:09:22.455 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:22.455 "is_configured": true, 00:09:22.455 "data_offset": 0, 00:09:22.455 "data_size": 65536 00:09:22.455 }, 00:09:22.455 { 00:09:22.455 "name": null, 00:09:22.455 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:22.455 "is_configured": false, 00:09:22.455 "data_offset": 0, 00:09:22.455 "data_size": 65536 00:09:22.455 }, 00:09:22.455 { 00:09:22.455 "name": null, 00:09:22.455 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:22.455 "is_configured": false, 00:09:22.455 "data_offset": 0, 00:09:22.455 "data_size": 65536 00:09:22.455 }, 00:09:22.455 { 00:09:22.455 "name": "BaseBdev4", 00:09:22.455 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:22.455 "is_configured": true, 00:09:22.455 "data_offset": 0, 00:09:22.455 "data_size": 65536 00:09:22.455 } 00:09:22.455 ] 00:09:22.455 }' 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.455 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 [2024-11-28 16:22:14.475551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.724 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.983 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.983 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.983 "name": "Existed_Raid", 00:09:22.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.983 "strip_size_kb": 64, 00:09:22.983 "state": "configuring", 00:09:22.983 "raid_level": "raid0", 00:09:22.983 "superblock": false, 00:09:22.983 "num_base_bdevs": 4, 00:09:22.983 "num_base_bdevs_discovered": 3, 00:09:22.983 "num_base_bdevs_operational": 4, 00:09:22.983 "base_bdevs_list": [ 00:09:22.983 { 00:09:22.983 "name": "BaseBdev1", 00:09:22.983 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:22.983 "is_configured": true, 00:09:22.983 "data_offset": 0, 00:09:22.983 "data_size": 65536 00:09:22.983 }, 00:09:22.983 { 00:09:22.983 "name": null, 00:09:22.983 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:22.983 "is_configured": false, 00:09:22.983 "data_offset": 0, 00:09:22.983 "data_size": 65536 00:09:22.983 }, 00:09:22.983 { 00:09:22.983 "name": "BaseBdev3", 00:09:22.983 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:22.983 "is_configured": 
true, 00:09:22.983 "data_offset": 0, 00:09:22.983 "data_size": 65536 00:09:22.983 }, 00:09:22.983 { 00:09:22.983 "name": "BaseBdev4", 00:09:22.983 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:22.983 "is_configured": true, 00:09:22.983 "data_offset": 0, 00:09:22.983 "data_size": 65536 00:09:22.983 } 00:09:22.983 ] 00:09:22.983 }' 00:09:22.983 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.984 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 [2024-11-28 16:22:14.970708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.244 16:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.244 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.511 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.511 "name": "Existed_Raid", 00:09:23.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.511 "strip_size_kb": 64, 00:09:23.511 "state": "configuring", 00:09:23.511 "raid_level": "raid0", 00:09:23.511 "superblock": false, 00:09:23.511 "num_base_bdevs": 4, 00:09:23.511 "num_base_bdevs_discovered": 2, 00:09:23.511 "num_base_bdevs_operational": 4, 00:09:23.511 
"base_bdevs_list": [ 00:09:23.511 { 00:09:23.511 "name": null, 00:09:23.511 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:23.511 "is_configured": false, 00:09:23.511 "data_offset": 0, 00:09:23.511 "data_size": 65536 00:09:23.511 }, 00:09:23.511 { 00:09:23.511 "name": null, 00:09:23.511 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:23.511 "is_configured": false, 00:09:23.511 "data_offset": 0, 00:09:23.511 "data_size": 65536 00:09:23.511 }, 00:09:23.511 { 00:09:23.511 "name": "BaseBdev3", 00:09:23.511 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:23.511 "is_configured": true, 00:09:23.511 "data_offset": 0, 00:09:23.511 "data_size": 65536 00:09:23.511 }, 00:09:23.511 { 00:09:23.511 "name": "BaseBdev4", 00:09:23.511 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:23.511 "is_configured": true, 00:09:23.511 "data_offset": 0, 00:09:23.511 "data_size": 65536 00:09:23.511 } 00:09:23.511 ] 00:09:23.511 }' 00:09:23.511 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.511 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.777 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.777 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.777 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:23.778 16:22:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.778 [2024-11-28 16:22:15.412151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.778 "name": "Existed_Raid", 00:09:23.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.778 "strip_size_kb": 64, 00:09:23.778 "state": "configuring", 00:09:23.778 "raid_level": "raid0", 00:09:23.778 "superblock": false, 00:09:23.778 "num_base_bdevs": 4, 00:09:23.778 "num_base_bdevs_discovered": 3, 00:09:23.778 "num_base_bdevs_operational": 4, 00:09:23.778 "base_bdevs_list": [ 00:09:23.778 { 00:09:23.778 "name": null, 00:09:23.778 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:23.778 "is_configured": false, 00:09:23.778 "data_offset": 0, 00:09:23.778 "data_size": 65536 00:09:23.778 }, 00:09:23.778 { 00:09:23.778 "name": "BaseBdev2", 00:09:23.778 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:23.778 "is_configured": true, 00:09:23.778 "data_offset": 0, 00:09:23.778 "data_size": 65536 00:09:23.778 }, 00:09:23.778 { 00:09:23.778 "name": "BaseBdev3", 00:09:23.778 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:23.778 "is_configured": true, 00:09:23.778 "data_offset": 0, 00:09:23.778 "data_size": 65536 00:09:23.778 }, 00:09:23.778 { 00:09:23.778 "name": "BaseBdev4", 00:09:23.778 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:23.778 "is_configured": true, 00:09:23.778 "data_offset": 0, 00:09:23.778 "data_size": 65536 00:09:23.778 } 00:09:23.778 ] 00:09:23.778 }' 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.778 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.038 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f5dd455d-66b6-474f-96f5-23c5c7974713 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.298 [2024-11-28 16:22:15.918027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:24.298 [2024-11-28 16:22:15.918076] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:24.298 [2024-11-28 16:22:15.918085] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:24.298 [2024-11-28 16:22:15.918337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:24.298 [2024-11-28 16:22:15.918455] bdev_raid.c:1760:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000006d00 00:09:24.298 [2024-11-28 16:22:15.918471] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:24.298 [2024-11-28 16:22:15.918636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.298 NewBaseBdev 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.298 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.299 [ 00:09:24.299 { 00:09:24.299 "name": "NewBaseBdev", 00:09:24.299 
"aliases": [ 00:09:24.299 "f5dd455d-66b6-474f-96f5-23c5c7974713" 00:09:24.299 ], 00:09:24.299 "product_name": "Malloc disk", 00:09:24.299 "block_size": 512, 00:09:24.299 "num_blocks": 65536, 00:09:24.299 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:24.299 "assigned_rate_limits": { 00:09:24.299 "rw_ios_per_sec": 0, 00:09:24.299 "rw_mbytes_per_sec": 0, 00:09:24.299 "r_mbytes_per_sec": 0, 00:09:24.299 "w_mbytes_per_sec": 0 00:09:24.299 }, 00:09:24.299 "claimed": true, 00:09:24.299 "claim_type": "exclusive_write", 00:09:24.299 "zoned": false, 00:09:24.299 "supported_io_types": { 00:09:24.299 "read": true, 00:09:24.299 "write": true, 00:09:24.299 "unmap": true, 00:09:24.299 "flush": true, 00:09:24.299 "reset": true, 00:09:24.299 "nvme_admin": false, 00:09:24.299 "nvme_io": false, 00:09:24.299 "nvme_io_md": false, 00:09:24.299 "write_zeroes": true, 00:09:24.299 "zcopy": true, 00:09:24.299 "get_zone_info": false, 00:09:24.299 "zone_management": false, 00:09:24.299 "zone_append": false, 00:09:24.299 "compare": false, 00:09:24.299 "compare_and_write": false, 00:09:24.299 "abort": true, 00:09:24.299 "seek_hole": false, 00:09:24.299 "seek_data": false, 00:09:24.299 "copy": true, 00:09:24.299 "nvme_iov_md": false 00:09:24.299 }, 00:09:24.299 "memory_domains": [ 00:09:24.299 { 00:09:24.299 "dma_device_id": "system", 00:09:24.299 "dma_device_type": 1 00:09:24.299 }, 00:09:24.299 { 00:09:24.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.299 "dma_device_type": 2 00:09:24.299 } 00:09:24.299 ], 00:09:24.299 "driver_specific": {} 00:09:24.299 } 00:09:24.299 ] 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.299 16:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.299 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.299 "name": "Existed_Raid", 00:09:24.299 "uuid": "953a7a18-c063-4825-9364-4e15ed6b2c9b", 00:09:24.299 "strip_size_kb": 64, 00:09:24.299 "state": "online", 00:09:24.299 "raid_level": "raid0", 00:09:24.299 "superblock": false, 00:09:24.299 "num_base_bdevs": 4, 00:09:24.299 "num_base_bdevs_discovered": 4, 00:09:24.299 "num_base_bdevs_operational": 4, 00:09:24.299 
"base_bdevs_list": [ 00:09:24.299 { 00:09:24.299 "name": "NewBaseBdev", 00:09:24.299 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:24.299 "is_configured": true, 00:09:24.299 "data_offset": 0, 00:09:24.299 "data_size": 65536 00:09:24.299 }, 00:09:24.299 { 00:09:24.299 "name": "BaseBdev2", 00:09:24.299 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:24.299 "is_configured": true, 00:09:24.299 "data_offset": 0, 00:09:24.299 "data_size": 65536 00:09:24.299 }, 00:09:24.299 { 00:09:24.299 "name": "BaseBdev3", 00:09:24.299 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:24.299 "is_configured": true, 00:09:24.299 "data_offset": 0, 00:09:24.299 "data_size": 65536 00:09:24.299 }, 00:09:24.299 { 00:09:24.299 "name": "BaseBdev4", 00:09:24.299 "uuid": "fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:24.299 "is_configured": true, 00:09:24.299 "data_offset": 0, 00:09:24.299 "data_size": 65536 00:09:24.299 } 00:09:24.299 ] 00:09:24.299 }' 00:09:24.299 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.299 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.870 16:22:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.870 [2024-11-28 16:22:16.369565] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.870 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.870 "name": "Existed_Raid", 00:09:24.870 "aliases": [ 00:09:24.870 "953a7a18-c063-4825-9364-4e15ed6b2c9b" 00:09:24.870 ], 00:09:24.870 "product_name": "Raid Volume", 00:09:24.870 "block_size": 512, 00:09:24.870 "num_blocks": 262144, 00:09:24.870 "uuid": "953a7a18-c063-4825-9364-4e15ed6b2c9b", 00:09:24.870 "assigned_rate_limits": { 00:09:24.870 "rw_ios_per_sec": 0, 00:09:24.870 "rw_mbytes_per_sec": 0, 00:09:24.870 "r_mbytes_per_sec": 0, 00:09:24.870 "w_mbytes_per_sec": 0 00:09:24.870 }, 00:09:24.870 "claimed": false, 00:09:24.870 "zoned": false, 00:09:24.870 "supported_io_types": { 00:09:24.870 "read": true, 00:09:24.870 "write": true, 00:09:24.870 "unmap": true, 00:09:24.870 "flush": true, 00:09:24.870 "reset": true, 00:09:24.870 "nvme_admin": false, 00:09:24.870 "nvme_io": false, 00:09:24.870 "nvme_io_md": false, 00:09:24.870 "write_zeroes": true, 00:09:24.870 "zcopy": false, 00:09:24.870 "get_zone_info": false, 00:09:24.870 "zone_management": false, 00:09:24.870 "zone_append": false, 00:09:24.870 "compare": false, 00:09:24.870 "compare_and_write": false, 00:09:24.870 "abort": false, 00:09:24.870 "seek_hole": false, 00:09:24.870 "seek_data": false, 00:09:24.870 "copy": false, 00:09:24.870 "nvme_iov_md": false 00:09:24.870 }, 00:09:24.871 "memory_domains": [ 00:09:24.871 { 00:09:24.871 "dma_device_id": "system", 00:09:24.871 "dma_device_type": 1 
00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.871 "dma_device_type": 2 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "system", 00:09:24.871 "dma_device_type": 1 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.871 "dma_device_type": 2 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "system", 00:09:24.871 "dma_device_type": 1 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.871 "dma_device_type": 2 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "system", 00:09:24.871 "dma_device_type": 1 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.871 "dma_device_type": 2 00:09:24.871 } 00:09:24.871 ], 00:09:24.871 "driver_specific": { 00:09:24.871 "raid": { 00:09:24.871 "uuid": "953a7a18-c063-4825-9364-4e15ed6b2c9b", 00:09:24.871 "strip_size_kb": 64, 00:09:24.871 "state": "online", 00:09:24.871 "raid_level": "raid0", 00:09:24.871 "superblock": false, 00:09:24.871 "num_base_bdevs": 4, 00:09:24.871 "num_base_bdevs_discovered": 4, 00:09:24.871 "num_base_bdevs_operational": 4, 00:09:24.871 "base_bdevs_list": [ 00:09:24.871 { 00:09:24.871 "name": "NewBaseBdev", 00:09:24.871 "uuid": "f5dd455d-66b6-474f-96f5-23c5c7974713", 00:09:24.871 "is_configured": true, 00:09:24.871 "data_offset": 0, 00:09:24.871 "data_size": 65536 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "name": "BaseBdev2", 00:09:24.871 "uuid": "623579ad-28e0-40a2-a273-1482fedd4e83", 00:09:24.871 "is_configured": true, 00:09:24.871 "data_offset": 0, 00:09:24.871 "data_size": 65536 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "name": "BaseBdev3", 00:09:24.871 "uuid": "aaf9eb3c-bb0b-431d-8dfd-c5136fd2b5ab", 00:09:24.871 "is_configured": true, 00:09:24.871 "data_offset": 0, 00:09:24.871 "data_size": 65536 00:09:24.871 }, 00:09:24.871 { 00:09:24.871 "name": "BaseBdev4", 00:09:24.871 "uuid": 
"fca7be3d-82dd-4010-b1ab-f02d6d0d4c62", 00:09:24.871 "is_configured": true, 00:09:24.871 "data_offset": 0, 00:09:24.871 "data_size": 65536 00:09:24.871 } 00:09:24.871 ] 00:09:24.871 } 00:09:24.871 } 00:09:24.871 }' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:24.871 BaseBdev2 00:09:24.871 BaseBdev3 00:09:24.871 BaseBdev4' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.871 16:22:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.132 [2024-11-28 16:22:16.656887] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.132 [2024-11-28 16:22:16.656961] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.132 [2024-11-28 16:22:16.657023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.132 [2024-11-28 16:22:16.657078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.132 [2024-11-28 16:22:16.657087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80376 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80376 ']' 00:09:25.132 
16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80376 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80376 00:09:25.132 killing process with pid 80376 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80376' 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80376 00:09:25.132 [2024-11-28 16:22:16.703449] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.132 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80376 00:09:25.132 [2024-11-28 16:22:16.744238] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.393 16:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:25.393 00:09:25.393 real 0m9.372s 00:09:25.393 user 0m15.997s 00:09:25.393 sys 0m1.961s 00:09:25.393 ************************************ 00:09:25.393 END TEST raid_state_function_test 00:09:25.393 ************************************ 00:09:25.393 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.393 16:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 16:22:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:09:25.393 16:22:17 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:25.393 16:22:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.393 16:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 ************************************ 00:09:25.393 START TEST raid_state_function_test_sb 00:09:25.393 ************************************ 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:25.393 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81025 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81025' 00:09:25.394 Process raid pid: 81025 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81025 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81025 ']' 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.394 16:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.394 [2024-11-28 16:22:17.149510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:25.394 [2024-11-28 16:22:17.149736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.655 [2024-11-28 16:22:17.308593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.655 [2024-11-28 16:22:17.352943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.655 [2024-11-28 16:22:17.394160] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.655 [2024-11-28 16:22:17.394279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.595 [2024-11-28 16:22:18.027074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.595 [2024-11-28 16:22:18.027137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.595 [2024-11-28 16:22:18.027149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.595 [2024-11-28 16:22:18.027159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.595 [2024-11-28 16:22:18.027165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:26.595 [2024-11-28 16:22:18.027176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.595 [2024-11-28 16:22:18.027182] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:26.595 [2024-11-28 16:22:18.027191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.595 16:22:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.595 "name": "Existed_Raid", 00:09:26.595 "uuid": "2bc71434-f56d-431d-b849-6b49531216cb", 00:09:26.595 "strip_size_kb": 64, 00:09:26.595 "state": "configuring", 00:09:26.595 "raid_level": "raid0", 00:09:26.595 "superblock": true, 00:09:26.595 "num_base_bdevs": 4, 00:09:26.595 "num_base_bdevs_discovered": 0, 00:09:26.595 "num_base_bdevs_operational": 4, 00:09:26.595 "base_bdevs_list": [ 00:09:26.595 { 00:09:26.595 "name": "BaseBdev1", 00:09:26.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.595 "is_configured": false, 00:09:26.595 "data_offset": 0, 00:09:26.595 "data_size": 0 00:09:26.595 }, 00:09:26.595 { 00:09:26.595 "name": "BaseBdev2", 00:09:26.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.595 "is_configured": false, 00:09:26.595 "data_offset": 0, 00:09:26.595 "data_size": 0 00:09:26.595 }, 00:09:26.595 { 00:09:26.595 "name": "BaseBdev3", 00:09:26.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.595 "is_configured": false, 00:09:26.595 "data_offset": 0, 00:09:26.595 "data_size": 0 00:09:26.595 }, 00:09:26.595 { 00:09:26.595 "name": "BaseBdev4", 00:09:26.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.595 "is_configured": false, 00:09:26.595 "data_offset": 0, 00:09:26.595 "data_size": 0 00:09:26.595 } 00:09:26.595 ] 00:09:26.595 }' 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.595 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.856 16:22:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.856 [2024-11-28 16:22:18.394334] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.856 [2024-11-28 16:22:18.394379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.856 [2024-11-28 16:22:18.402373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.856 [2024-11-28 16:22:18.402415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.856 [2024-11-28 16:22:18.402423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.856 [2024-11-28 16:22:18.402432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.856 [2024-11-28 16:22:18.402438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.856 [2024-11-28 16:22:18.402446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.856 [2024-11-28 16:22:18.402452] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:26.856 [2024-11-28 16:22:18.402460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.856 [2024-11-28 16:22:18.419000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.856 BaseBdev1 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.856 [ 00:09:26.856 { 00:09:26.856 "name": "BaseBdev1", 00:09:26.856 "aliases": [ 00:09:26.856 "900a2a75-a5af-498f-b5b0-2c58ebece819" 00:09:26.856 ], 00:09:26.856 "product_name": "Malloc disk", 00:09:26.856 "block_size": 512, 00:09:26.856 "num_blocks": 65536, 00:09:26.856 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:26.856 "assigned_rate_limits": { 00:09:26.856 "rw_ios_per_sec": 0, 00:09:26.856 "rw_mbytes_per_sec": 0, 00:09:26.856 "r_mbytes_per_sec": 0, 00:09:26.856 "w_mbytes_per_sec": 0 00:09:26.856 }, 00:09:26.856 "claimed": true, 00:09:26.856 "claim_type": "exclusive_write", 00:09:26.856 "zoned": false, 00:09:26.856 "supported_io_types": { 00:09:26.856 "read": true, 00:09:26.856 "write": true, 00:09:26.856 "unmap": true, 00:09:26.856 "flush": true, 00:09:26.856 "reset": true, 00:09:26.856 "nvme_admin": false, 00:09:26.856 "nvme_io": false, 00:09:26.856 "nvme_io_md": false, 00:09:26.856 "write_zeroes": true, 00:09:26.856 "zcopy": true, 00:09:26.856 "get_zone_info": false, 00:09:26.856 "zone_management": false, 00:09:26.856 "zone_append": false, 00:09:26.856 "compare": false, 00:09:26.856 "compare_and_write": false, 00:09:26.856 "abort": true, 00:09:26.856 "seek_hole": false, 00:09:26.856 "seek_data": false, 00:09:26.856 "copy": true, 00:09:26.856 "nvme_iov_md": false 00:09:26.856 }, 00:09:26.856 "memory_domains": [ 00:09:26.856 { 00:09:26.856 "dma_device_id": "system", 00:09:26.856 "dma_device_type": 1 00:09:26.856 }, 00:09:26.856 { 00:09:26.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.856 "dma_device_type": 2 00:09:26.856 } 
00:09:26.856 ], 00:09:26.856 "driver_specific": {} 00:09:26.856 } 00:09:26.856 ] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:26.856 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.857 16:22:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.857 "name": "Existed_Raid", 00:09:26.857 "uuid": "6efc2346-229d-4dae-b9a0-81afe650fb15", 00:09:26.857 "strip_size_kb": 64, 00:09:26.857 "state": "configuring", 00:09:26.857 "raid_level": "raid0", 00:09:26.857 "superblock": true, 00:09:26.857 "num_base_bdevs": 4, 00:09:26.857 "num_base_bdevs_discovered": 1, 00:09:26.857 "num_base_bdevs_operational": 4, 00:09:26.857 "base_bdevs_list": [ 00:09:26.857 { 00:09:26.857 "name": "BaseBdev1", 00:09:26.857 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:26.857 "is_configured": true, 00:09:26.857 "data_offset": 2048, 00:09:26.857 "data_size": 63488 00:09:26.857 }, 00:09:26.857 { 00:09:26.857 "name": "BaseBdev2", 00:09:26.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.857 "is_configured": false, 00:09:26.857 "data_offset": 0, 00:09:26.857 "data_size": 0 00:09:26.857 }, 00:09:26.857 { 00:09:26.857 "name": "BaseBdev3", 00:09:26.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.857 "is_configured": false, 00:09:26.857 "data_offset": 0, 00:09:26.857 "data_size": 0 00:09:26.857 }, 00:09:26.857 { 00:09:26.857 "name": "BaseBdev4", 00:09:26.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.857 "is_configured": false, 00:09:26.857 "data_offset": 0, 00:09:26.857 "data_size": 0 00:09:26.857 } 00:09:26.857 ] 00:09:26.857 }' 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.857 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.427 16:22:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.427 [2024-11-28 16:22:18.934156] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.427 [2024-11-28 16:22:18.934256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.427 [2024-11-28 16:22:18.946183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.427 [2024-11-28 16:22:18.948037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.427 [2024-11-28 16:22:18.948111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.427 [2024-11-28 16:22:18.948124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.427 [2024-11-28 16:22:18.948133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.427 [2024-11-28 16:22:18.948139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:27.427 [2024-11-28 16:22:18.948147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:27.427 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.428 16:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:27.428 "name": "Existed_Raid", 00:09:27.428 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:27.428 "strip_size_kb": 64, 00:09:27.428 "state": "configuring", 00:09:27.428 "raid_level": "raid0", 00:09:27.428 "superblock": true, 00:09:27.428 "num_base_bdevs": 4, 00:09:27.428 "num_base_bdevs_discovered": 1, 00:09:27.428 "num_base_bdevs_operational": 4, 00:09:27.428 "base_bdevs_list": [ 00:09:27.428 { 00:09:27.428 "name": "BaseBdev1", 00:09:27.428 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:27.428 "is_configured": true, 00:09:27.428 "data_offset": 2048, 00:09:27.428 "data_size": 63488 00:09:27.428 }, 00:09:27.428 { 00:09:27.428 "name": "BaseBdev2", 00:09:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.428 "is_configured": false, 00:09:27.428 "data_offset": 0, 00:09:27.428 "data_size": 0 00:09:27.428 }, 00:09:27.428 { 00:09:27.428 "name": "BaseBdev3", 00:09:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.428 "is_configured": false, 00:09:27.428 "data_offset": 0, 00:09:27.428 "data_size": 0 00:09:27.428 }, 00:09:27.428 { 00:09:27.428 "name": "BaseBdev4", 00:09:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.428 "is_configured": false, 00:09:27.428 "data_offset": 0, 00:09:27.428 "data_size": 0 00:09:27.428 } 00:09:27.428 ] 00:09:27.428 }' 00:09:27.428 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.428 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.688 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 [2024-11-28 16:22:19.353762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:27.688 BaseBdev2 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 [ 00:09:27.689 { 00:09:27.689 "name": "BaseBdev2", 00:09:27.689 "aliases": [ 00:09:27.689 "1b7f224c-3d50-4196-ba81-652d6ff4b554" 00:09:27.689 ], 00:09:27.689 "product_name": "Malloc disk", 00:09:27.689 "block_size": 512, 00:09:27.689 "num_blocks": 65536, 00:09:27.689 "uuid": "1b7f224c-3d50-4196-ba81-652d6ff4b554", 
00:09:27.689 "assigned_rate_limits": { 00:09:27.689 "rw_ios_per_sec": 0, 00:09:27.689 "rw_mbytes_per_sec": 0, 00:09:27.689 "r_mbytes_per_sec": 0, 00:09:27.689 "w_mbytes_per_sec": 0 00:09:27.689 }, 00:09:27.689 "claimed": true, 00:09:27.689 "claim_type": "exclusive_write", 00:09:27.689 "zoned": false, 00:09:27.689 "supported_io_types": { 00:09:27.689 "read": true, 00:09:27.689 "write": true, 00:09:27.689 "unmap": true, 00:09:27.689 "flush": true, 00:09:27.689 "reset": true, 00:09:27.689 "nvme_admin": false, 00:09:27.689 "nvme_io": false, 00:09:27.689 "nvme_io_md": false, 00:09:27.689 "write_zeroes": true, 00:09:27.689 "zcopy": true, 00:09:27.689 "get_zone_info": false, 00:09:27.689 "zone_management": false, 00:09:27.689 "zone_append": false, 00:09:27.689 "compare": false, 00:09:27.689 "compare_and_write": false, 00:09:27.689 "abort": true, 00:09:27.689 "seek_hole": false, 00:09:27.689 "seek_data": false, 00:09:27.689 "copy": true, 00:09:27.689 "nvme_iov_md": false 00:09:27.689 }, 00:09:27.689 "memory_domains": [ 00:09:27.689 { 00:09:27.689 "dma_device_id": "system", 00:09:27.689 "dma_device_type": 1 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.689 "dma_device_type": 2 00:09:27.689 } 00:09:27.689 ], 00:09:27.689 "driver_specific": {} 00:09:27.689 } 00:09:27.689 ] 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.689 "name": "Existed_Raid", 00:09:27.689 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:27.689 "strip_size_kb": 64, 00:09:27.689 "state": "configuring", 00:09:27.689 "raid_level": "raid0", 00:09:27.689 "superblock": true, 00:09:27.689 "num_base_bdevs": 4, 00:09:27.689 "num_base_bdevs_discovered": 2, 00:09:27.689 
"num_base_bdevs_operational": 4, 00:09:27.689 "base_bdevs_list": [ 00:09:27.689 { 00:09:27.689 "name": "BaseBdev1", 00:09:27.689 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:27.689 "is_configured": true, 00:09:27.689 "data_offset": 2048, 00:09:27.689 "data_size": 63488 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "name": "BaseBdev2", 00:09:27.689 "uuid": "1b7f224c-3d50-4196-ba81-652d6ff4b554", 00:09:27.689 "is_configured": true, 00:09:27.689 "data_offset": 2048, 00:09:27.689 "data_size": 63488 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "name": "BaseBdev3", 00:09:27.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.689 "is_configured": false, 00:09:27.689 "data_offset": 0, 00:09:27.689 "data_size": 0 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "name": "BaseBdev4", 00:09:27.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.689 "is_configured": false, 00:09:27.689 "data_offset": 0, 00:09:27.689 "data_size": 0 00:09:27.689 } 00:09:27.689 ] 00:09:27.689 }' 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.689 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.259 [2024-11-28 16:22:19.823779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.259 BaseBdev3 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.259 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.259 [ 00:09:28.260 { 00:09:28.260 "name": "BaseBdev3", 00:09:28.260 "aliases": [ 00:09:28.260 "a0469a6e-d9d7-471d-ae0d-1af2a90596fc" 00:09:28.260 ], 00:09:28.260 "product_name": "Malloc disk", 00:09:28.260 "block_size": 512, 00:09:28.260 "num_blocks": 65536, 00:09:28.260 "uuid": "a0469a6e-d9d7-471d-ae0d-1af2a90596fc", 00:09:28.260 "assigned_rate_limits": { 00:09:28.260 "rw_ios_per_sec": 0, 00:09:28.260 "rw_mbytes_per_sec": 0, 00:09:28.260 "r_mbytes_per_sec": 0, 00:09:28.260 "w_mbytes_per_sec": 0 00:09:28.260 }, 00:09:28.260 "claimed": true, 00:09:28.260 "claim_type": "exclusive_write", 00:09:28.260 "zoned": false, 00:09:28.260 "supported_io_types": { 
00:09:28.260 "read": true, 00:09:28.260 "write": true, 00:09:28.260 "unmap": true, 00:09:28.260 "flush": true, 00:09:28.260 "reset": true, 00:09:28.260 "nvme_admin": false, 00:09:28.260 "nvme_io": false, 00:09:28.260 "nvme_io_md": false, 00:09:28.260 "write_zeroes": true, 00:09:28.260 "zcopy": true, 00:09:28.260 "get_zone_info": false, 00:09:28.260 "zone_management": false, 00:09:28.260 "zone_append": false, 00:09:28.260 "compare": false, 00:09:28.260 "compare_and_write": false, 00:09:28.260 "abort": true, 00:09:28.260 "seek_hole": false, 00:09:28.260 "seek_data": false, 00:09:28.260 "copy": true, 00:09:28.260 "nvme_iov_md": false 00:09:28.260 }, 00:09:28.260 "memory_domains": [ 00:09:28.260 { 00:09:28.260 "dma_device_id": "system", 00:09:28.260 "dma_device_type": 1 00:09:28.260 }, 00:09:28.260 { 00:09:28.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.260 "dma_device_type": 2 00:09:28.260 } 00:09:28.260 ], 00:09:28.260 "driver_specific": {} 00:09:28.260 } 00:09:28.260 ] 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.260 "name": "Existed_Raid", 00:09:28.260 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:28.260 "strip_size_kb": 64, 00:09:28.260 "state": "configuring", 00:09:28.260 "raid_level": "raid0", 00:09:28.260 "superblock": true, 00:09:28.260 "num_base_bdevs": 4, 00:09:28.260 "num_base_bdevs_discovered": 3, 00:09:28.260 "num_base_bdevs_operational": 4, 00:09:28.260 "base_bdevs_list": [ 00:09:28.260 { 00:09:28.260 "name": "BaseBdev1", 00:09:28.260 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:28.260 "is_configured": true, 00:09:28.260 "data_offset": 2048, 00:09:28.260 "data_size": 63488 00:09:28.260 }, 00:09:28.260 { 00:09:28.260 "name": "BaseBdev2", 00:09:28.260 
"uuid": "1b7f224c-3d50-4196-ba81-652d6ff4b554", 00:09:28.260 "is_configured": true, 00:09:28.260 "data_offset": 2048, 00:09:28.260 "data_size": 63488 00:09:28.260 }, 00:09:28.260 { 00:09:28.260 "name": "BaseBdev3", 00:09:28.260 "uuid": "a0469a6e-d9d7-471d-ae0d-1af2a90596fc", 00:09:28.260 "is_configured": true, 00:09:28.260 "data_offset": 2048, 00:09:28.260 "data_size": 63488 00:09:28.260 }, 00:09:28.260 { 00:09:28.260 "name": "BaseBdev4", 00:09:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.260 "is_configured": false, 00:09:28.260 "data_offset": 0, 00:09:28.260 "data_size": 0 00:09:28.260 } 00:09:28.260 ] 00:09:28.260 }' 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.260 16:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.831 [2024-11-28 16:22:20.377745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:28.831 [2024-11-28 16:22:20.378080] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:28.831 [2024-11-28 16:22:20.378134] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:28.831 [2024-11-28 16:22:20.378421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.831 [2024-11-28 16:22:20.378579] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:28.831 BaseBdev4 00:09:28.831 [2024-11-28 16:22:20.378627] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:28.831 [2024-11-28 16:22:20.378752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.831 [ 00:09:28.831 { 00:09:28.831 "name": "BaseBdev4", 00:09:28.831 "aliases": [ 00:09:28.831 "8fe4d0d7-54e7-49dc-aba1-4c7a2dd04c2b" 00:09:28.831 ], 00:09:28.831 "product_name": "Malloc disk", 00:09:28.831 "block_size": 512, 00:09:28.831 
"num_blocks": 65536, 00:09:28.831 "uuid": "8fe4d0d7-54e7-49dc-aba1-4c7a2dd04c2b", 00:09:28.831 "assigned_rate_limits": { 00:09:28.831 "rw_ios_per_sec": 0, 00:09:28.831 "rw_mbytes_per_sec": 0, 00:09:28.831 "r_mbytes_per_sec": 0, 00:09:28.831 "w_mbytes_per_sec": 0 00:09:28.831 }, 00:09:28.831 "claimed": true, 00:09:28.831 "claim_type": "exclusive_write", 00:09:28.831 "zoned": false, 00:09:28.831 "supported_io_types": { 00:09:28.831 "read": true, 00:09:28.831 "write": true, 00:09:28.831 "unmap": true, 00:09:28.831 "flush": true, 00:09:28.831 "reset": true, 00:09:28.831 "nvme_admin": false, 00:09:28.831 "nvme_io": false, 00:09:28.831 "nvme_io_md": false, 00:09:28.831 "write_zeroes": true, 00:09:28.831 "zcopy": true, 00:09:28.831 "get_zone_info": false, 00:09:28.831 "zone_management": false, 00:09:28.831 "zone_append": false, 00:09:28.831 "compare": false, 00:09:28.831 "compare_and_write": false, 00:09:28.831 "abort": true, 00:09:28.831 "seek_hole": false, 00:09:28.831 "seek_data": false, 00:09:28.831 "copy": true, 00:09:28.831 "nvme_iov_md": false 00:09:28.831 }, 00:09:28.831 "memory_domains": [ 00:09:28.831 { 00:09:28.831 "dma_device_id": "system", 00:09:28.831 "dma_device_type": 1 00:09:28.831 }, 00:09:28.831 { 00:09:28.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.831 "dma_device_type": 2 00:09:28.831 } 00:09:28.831 ], 00:09:28.831 "driver_specific": {} 00:09:28.831 } 00:09:28.831 ] 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.831 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.832 "name": "Existed_Raid", 00:09:28.832 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:28.832 "strip_size_kb": 64, 00:09:28.832 "state": "online", 00:09:28.832 "raid_level": "raid0", 00:09:28.832 "superblock": true, 00:09:28.832 "num_base_bdevs": 4, 
00:09:28.832 "num_base_bdevs_discovered": 4, 00:09:28.832 "num_base_bdevs_operational": 4, 00:09:28.832 "base_bdevs_list": [ 00:09:28.832 { 00:09:28.832 "name": "BaseBdev1", 00:09:28.832 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 }, 00:09:28.832 { 00:09:28.832 "name": "BaseBdev2", 00:09:28.832 "uuid": "1b7f224c-3d50-4196-ba81-652d6ff4b554", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 }, 00:09:28.832 { 00:09:28.832 "name": "BaseBdev3", 00:09:28.832 "uuid": "a0469a6e-d9d7-471d-ae0d-1af2a90596fc", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 }, 00:09:28.832 { 00:09:28.832 "name": "BaseBdev4", 00:09:28.832 "uuid": "8fe4d0d7-54e7-49dc-aba1-4c7a2dd04c2b", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 } 00:09:28.832 ] 00:09:28.832 }' 00:09:28.832 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.832 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.401 
16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.401 [2024-11-28 16:22:20.873239] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.401 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.401 "name": "Existed_Raid", 00:09:29.401 "aliases": [ 00:09:29.401 "eb7cd968-4e2f-47be-bc84-c92969b3cc79" 00:09:29.401 ], 00:09:29.401 "product_name": "Raid Volume", 00:09:29.401 "block_size": 512, 00:09:29.401 "num_blocks": 253952, 00:09:29.401 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:29.401 "assigned_rate_limits": { 00:09:29.401 "rw_ios_per_sec": 0, 00:09:29.401 "rw_mbytes_per_sec": 0, 00:09:29.401 "r_mbytes_per_sec": 0, 00:09:29.401 "w_mbytes_per_sec": 0 00:09:29.401 }, 00:09:29.401 "claimed": false, 00:09:29.401 "zoned": false, 00:09:29.401 "supported_io_types": { 00:09:29.401 "read": true, 00:09:29.401 "write": true, 00:09:29.401 "unmap": true, 00:09:29.401 "flush": true, 00:09:29.401 "reset": true, 00:09:29.401 "nvme_admin": false, 00:09:29.401 "nvme_io": false, 00:09:29.401 "nvme_io_md": false, 00:09:29.401 "write_zeroes": true, 00:09:29.401 "zcopy": false, 00:09:29.401 "get_zone_info": false, 00:09:29.401 "zone_management": false, 00:09:29.401 "zone_append": false, 00:09:29.401 "compare": false, 00:09:29.401 "compare_and_write": false, 00:09:29.401 "abort": false, 00:09:29.401 "seek_hole": false, 00:09:29.401 "seek_data": false, 00:09:29.401 "copy": false, 00:09:29.401 
"nvme_iov_md": false 00:09:29.401 }, 00:09:29.401 "memory_domains": [ 00:09:29.401 { 00:09:29.401 "dma_device_id": "system", 00:09:29.401 "dma_device_type": 1 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.401 "dma_device_type": 2 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "system", 00:09:29.401 "dma_device_type": 1 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.401 "dma_device_type": 2 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "system", 00:09:29.401 "dma_device_type": 1 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.401 "dma_device_type": 2 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "system", 00:09:29.401 "dma_device_type": 1 00:09:29.401 }, 00:09:29.401 { 00:09:29.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.401 "dma_device_type": 2 00:09:29.401 } 00:09:29.401 ], 00:09:29.401 "driver_specific": { 00:09:29.401 "raid": { 00:09:29.401 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:29.401 "strip_size_kb": 64, 00:09:29.401 "state": "online", 00:09:29.401 "raid_level": "raid0", 00:09:29.401 "superblock": true, 00:09:29.401 "num_base_bdevs": 4, 00:09:29.401 "num_base_bdevs_discovered": 4, 00:09:29.401 "num_base_bdevs_operational": 4, 00:09:29.401 "base_bdevs_list": [ 00:09:29.401 { 00:09:29.401 "name": "BaseBdev1", 00:09:29.401 "uuid": "900a2a75-a5af-498f-b5b0-2c58ebece819", 00:09:29.401 "is_configured": true, 00:09:29.401 "data_offset": 2048, 00:09:29.401 "data_size": 63488 00:09:29.402 }, 00:09:29.402 { 00:09:29.402 "name": "BaseBdev2", 00:09:29.402 "uuid": "1b7f224c-3d50-4196-ba81-652d6ff4b554", 00:09:29.402 "is_configured": true, 00:09:29.402 "data_offset": 2048, 00:09:29.402 "data_size": 63488 00:09:29.402 }, 00:09:29.402 { 00:09:29.402 "name": "BaseBdev3", 00:09:29.402 "uuid": "a0469a6e-d9d7-471d-ae0d-1af2a90596fc", 00:09:29.402 "is_configured": true, 
00:09:29.402 "data_offset": 2048, 00:09:29.402 "data_size": 63488 00:09:29.402 }, 00:09:29.402 { 00:09:29.402 "name": "BaseBdev4", 00:09:29.402 "uuid": "8fe4d0d7-54e7-49dc-aba1-4c7a2dd04c2b", 00:09:29.402 "is_configured": true, 00:09:29.402 "data_offset": 2048, 00:09:29.402 "data_size": 63488 00:09:29.402 } 00:09:29.402 ] 00:09:29.402 } 00:09:29.402 } 00:09:29.402 }' 00:09:29.402 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.402 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.402 BaseBdev2 00:09:29.402 BaseBdev3 00:09:29.402 BaseBdev4' 00:09:29.402 16:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.402 16:22:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.402 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.662 [2024-11-28 16:22:21.208449] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.662 [2024-11-28 16:22:21.208485] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.662 [2024-11-28 16:22:21.208546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.662 "name": "Existed_Raid", 00:09:29.662 "uuid": "eb7cd968-4e2f-47be-bc84-c92969b3cc79", 00:09:29.662 "strip_size_kb": 64, 00:09:29.662 "state": "offline", 00:09:29.662 "raid_level": "raid0", 00:09:29.662 "superblock": true, 00:09:29.662 "num_base_bdevs": 4, 00:09:29.662 "num_base_bdevs_discovered": 3, 00:09:29.662 "num_base_bdevs_operational": 3, 00:09:29.662 "base_bdevs_list": [ 00:09:29.662 { 00:09:29.662 "name": null, 00:09:29.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.662 "is_configured": false, 00:09:29.662 "data_offset": 0, 00:09:29.662 "data_size": 63488 00:09:29.662 }, 00:09:29.662 { 00:09:29.662 "name": "BaseBdev2", 00:09:29.662 "uuid": "1b7f224c-3d50-4196-ba81-652d6ff4b554", 00:09:29.662 "is_configured": true, 00:09:29.662 "data_offset": 2048, 00:09:29.662 "data_size": 63488 00:09:29.662 }, 00:09:29.662 { 00:09:29.662 "name": "BaseBdev3", 00:09:29.662 "uuid": "a0469a6e-d9d7-471d-ae0d-1af2a90596fc", 00:09:29.662 "is_configured": true, 00:09:29.662 "data_offset": 2048, 00:09:29.662 "data_size": 63488 00:09:29.662 }, 00:09:29.662 { 00:09:29.662 "name": "BaseBdev4", 00:09:29.662 "uuid": "8fe4d0d7-54e7-49dc-aba1-4c7a2dd04c2b", 00:09:29.662 "is_configured": true, 00:09:29.662 "data_offset": 2048, 00:09:29.662 "data_size": 63488 00:09:29.662 } 00:09:29.662 ] 00:09:29.662 }' 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.662 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.233 
16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 [2024-11-28 16:22:21.746667] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 [2024-11-28 16:22:21.813563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:30.233 16:22:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.233 [2024-11-28 16:22:21.864588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:30.233 [2024-11-28 16:22:21.864681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.233 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 BaseBdev2 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 [ 00:09:30.234 { 00:09:30.234 "name": "BaseBdev2", 00:09:30.234 "aliases": [ 00:09:30.234 
"4afacd03-27a3-45e7-9536-e5ac6eec6ec6" 00:09:30.234 ], 00:09:30.234 "product_name": "Malloc disk", 00:09:30.234 "block_size": 512, 00:09:30.234 "num_blocks": 65536, 00:09:30.234 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:30.234 "assigned_rate_limits": { 00:09:30.234 "rw_ios_per_sec": 0, 00:09:30.234 "rw_mbytes_per_sec": 0, 00:09:30.234 "r_mbytes_per_sec": 0, 00:09:30.234 "w_mbytes_per_sec": 0 00:09:30.234 }, 00:09:30.234 "claimed": false, 00:09:30.234 "zoned": false, 00:09:30.234 "supported_io_types": { 00:09:30.234 "read": true, 00:09:30.234 "write": true, 00:09:30.234 "unmap": true, 00:09:30.234 "flush": true, 00:09:30.234 "reset": true, 00:09:30.234 "nvme_admin": false, 00:09:30.234 "nvme_io": false, 00:09:30.234 "nvme_io_md": false, 00:09:30.234 "write_zeroes": true, 00:09:30.234 "zcopy": true, 00:09:30.234 "get_zone_info": false, 00:09:30.234 "zone_management": false, 00:09:30.234 "zone_append": false, 00:09:30.234 "compare": false, 00:09:30.234 "compare_and_write": false, 00:09:30.234 "abort": true, 00:09:30.234 "seek_hole": false, 00:09:30.234 "seek_data": false, 00:09:30.234 "copy": true, 00:09:30.234 "nvme_iov_md": false 00:09:30.234 }, 00:09:30.234 "memory_domains": [ 00:09:30.234 { 00:09:30.234 "dma_device_id": "system", 00:09:30.234 "dma_device_type": 1 00:09:30.234 }, 00:09:30.234 { 00:09:30.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.234 "dma_device_type": 2 00:09:30.234 } 00:09:30.234 ], 00:09:30.234 "driver_specific": {} 00:09:30.234 } 00:09:30.234 ] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.234 16:22:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 BaseBdev3 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 16:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.495 [ 00:09:30.495 { 
00:09:30.495 "name": "BaseBdev3", 00:09:30.495 "aliases": [ 00:09:30.495 "92db74c2-dcaa-49c1-aa91-1837760e7aff" 00:09:30.495 ], 00:09:30.495 "product_name": "Malloc disk", 00:09:30.495 "block_size": 512, 00:09:30.495 "num_blocks": 65536, 00:09:30.495 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:30.495 "assigned_rate_limits": { 00:09:30.495 "rw_ios_per_sec": 0, 00:09:30.495 "rw_mbytes_per_sec": 0, 00:09:30.495 "r_mbytes_per_sec": 0, 00:09:30.495 "w_mbytes_per_sec": 0 00:09:30.495 }, 00:09:30.495 "claimed": false, 00:09:30.495 "zoned": false, 00:09:30.495 "supported_io_types": { 00:09:30.495 "read": true, 00:09:30.495 "write": true, 00:09:30.495 "unmap": true, 00:09:30.495 "flush": true, 00:09:30.495 "reset": true, 00:09:30.495 "nvme_admin": false, 00:09:30.495 "nvme_io": false, 00:09:30.495 "nvme_io_md": false, 00:09:30.495 "write_zeroes": true, 00:09:30.495 "zcopy": true, 00:09:30.495 "get_zone_info": false, 00:09:30.495 "zone_management": false, 00:09:30.495 "zone_append": false, 00:09:30.495 "compare": false, 00:09:30.495 "compare_and_write": false, 00:09:30.495 "abort": true, 00:09:30.495 "seek_hole": false, 00:09:30.495 "seek_data": false, 00:09:30.495 "copy": true, 00:09:30.495 "nvme_iov_md": false 00:09:30.495 }, 00:09:30.495 "memory_domains": [ 00:09:30.495 { 00:09:30.495 "dma_device_id": "system", 00:09:30.495 "dma_device_type": 1 00:09:30.495 }, 00:09:30.495 { 00:09:30.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.495 "dma_device_type": 2 00:09:30.495 } 00:09:30.495 ], 00:09:30.495 "driver_specific": {} 00:09:30.495 } 00:09:30.495 ] 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.495 BaseBdev4 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.495 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:30.496 [ 00:09:30.496 { 00:09:30.496 "name": "BaseBdev4", 00:09:30.496 "aliases": [ 00:09:30.496 "895a8fed-aa1e-40f2-ad90-3ec4c9331288" 00:09:30.496 ], 00:09:30.496 "product_name": "Malloc disk", 00:09:30.496 "block_size": 512, 00:09:30.496 "num_blocks": 65536, 00:09:30.496 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:30.496 "assigned_rate_limits": { 00:09:30.496 "rw_ios_per_sec": 0, 00:09:30.496 "rw_mbytes_per_sec": 0, 00:09:30.496 "r_mbytes_per_sec": 0, 00:09:30.496 "w_mbytes_per_sec": 0 00:09:30.496 }, 00:09:30.496 "claimed": false, 00:09:30.496 "zoned": false, 00:09:30.496 "supported_io_types": { 00:09:30.496 "read": true, 00:09:30.496 "write": true, 00:09:30.496 "unmap": true, 00:09:30.496 "flush": true, 00:09:30.496 "reset": true, 00:09:30.496 "nvme_admin": false, 00:09:30.496 "nvme_io": false, 00:09:30.496 "nvme_io_md": false, 00:09:30.496 "write_zeroes": true, 00:09:30.496 "zcopy": true, 00:09:30.496 "get_zone_info": false, 00:09:30.496 "zone_management": false, 00:09:30.496 "zone_append": false, 00:09:30.496 "compare": false, 00:09:30.496 "compare_and_write": false, 00:09:30.496 "abort": true, 00:09:30.496 "seek_hole": false, 00:09:30.496 "seek_data": false, 00:09:30.496 "copy": true, 00:09:30.496 "nvme_iov_md": false 00:09:30.496 }, 00:09:30.496 "memory_domains": [ 00:09:30.496 { 00:09:30.496 "dma_device_id": "system", 00:09:30.496 "dma_device_type": 1 00:09:30.496 }, 00:09:30.496 { 00:09:30.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.496 "dma_device_type": 2 00:09:30.496 } 00:09:30.496 ], 00:09:30.496 "driver_specific": {} 00:09:30.496 } 00:09:30.496 ] 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.496 16:22:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 [2024-11-28 16:22:22.079532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.496 [2024-11-28 16:22:22.079651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.496 [2024-11-28 16:22:22.079732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.496 [2024-11-28 16:22:22.081498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.496 [2024-11-28 16:22:22.081587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.496 "name": "Existed_Raid", 00:09:30.496 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:30.496 "strip_size_kb": 64, 00:09:30.496 "state": "configuring", 00:09:30.496 "raid_level": "raid0", 00:09:30.496 "superblock": true, 00:09:30.496 "num_base_bdevs": 4, 00:09:30.496 "num_base_bdevs_discovered": 3, 00:09:30.496 "num_base_bdevs_operational": 4, 00:09:30.496 "base_bdevs_list": [ 00:09:30.496 { 00:09:30.496 "name": "BaseBdev1", 00:09:30.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.496 "is_configured": false, 00:09:30.496 "data_offset": 0, 00:09:30.496 "data_size": 0 00:09:30.496 }, 00:09:30.496 { 00:09:30.496 "name": "BaseBdev2", 00:09:30.496 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:30.496 "is_configured": true, 00:09:30.496 "data_offset": 2048, 00:09:30.496 "data_size": 63488 
00:09:30.496 }, 00:09:30.496 { 00:09:30.496 "name": "BaseBdev3", 00:09:30.496 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:30.496 "is_configured": true, 00:09:30.496 "data_offset": 2048, 00:09:30.496 "data_size": 63488 00:09:30.496 }, 00:09:30.496 { 00:09:30.496 "name": "BaseBdev4", 00:09:30.496 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:30.496 "is_configured": true, 00:09:30.496 "data_offset": 2048, 00:09:30.496 "data_size": 63488 00:09:30.496 } 00:09:30.496 ] 00:09:30.496 }' 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.496 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.066 [2024-11-28 16:22:22.542776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.066 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.066 "name": "Existed_Raid", 00:09:31.066 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:31.066 "strip_size_kb": 64, 00:09:31.066 "state": "configuring", 00:09:31.066 "raid_level": "raid0", 00:09:31.066 "superblock": true, 00:09:31.066 "num_base_bdevs": 4, 00:09:31.067 "num_base_bdevs_discovered": 2, 00:09:31.067 "num_base_bdevs_operational": 4, 00:09:31.067 "base_bdevs_list": [ 00:09:31.067 { 00:09:31.067 "name": "BaseBdev1", 00:09:31.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.067 "is_configured": false, 00:09:31.067 "data_offset": 0, 00:09:31.067 "data_size": 0 00:09:31.067 }, 00:09:31.067 { 00:09:31.067 "name": null, 00:09:31.067 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:31.067 "is_configured": false, 00:09:31.067 "data_offset": 0, 00:09:31.067 "data_size": 63488 
00:09:31.067 }, 00:09:31.067 { 00:09:31.067 "name": "BaseBdev3", 00:09:31.067 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:31.067 "is_configured": true, 00:09:31.067 "data_offset": 2048, 00:09:31.067 "data_size": 63488 00:09:31.067 }, 00:09:31.067 { 00:09:31.067 "name": "BaseBdev4", 00:09:31.067 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:31.067 "is_configured": true, 00:09:31.067 "data_offset": 2048, 00:09:31.067 "data_size": 63488 00:09:31.067 } 00:09:31.067 ] 00:09:31.067 }' 00:09:31.067 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.067 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.327 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.327 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.327 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.328 16:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.328 16:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.328 [2024-11-28 16:22:23.032823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.328 BaseBdev1 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.328 [ 00:09:31.328 { 00:09:31.328 "name": "BaseBdev1", 00:09:31.328 "aliases": [ 00:09:31.328 "295e2ef4-eb46-4d22-a50d-0b0d76ac6981" 00:09:31.328 ], 00:09:31.328 "product_name": "Malloc disk", 00:09:31.328 "block_size": 512, 00:09:31.328 "num_blocks": 65536, 00:09:31.328 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:31.328 "assigned_rate_limits": { 00:09:31.328 "rw_ios_per_sec": 0, 00:09:31.328 "rw_mbytes_per_sec": 0, 
00:09:31.328 "r_mbytes_per_sec": 0, 00:09:31.328 "w_mbytes_per_sec": 0 00:09:31.328 }, 00:09:31.328 "claimed": true, 00:09:31.328 "claim_type": "exclusive_write", 00:09:31.328 "zoned": false, 00:09:31.328 "supported_io_types": { 00:09:31.328 "read": true, 00:09:31.328 "write": true, 00:09:31.328 "unmap": true, 00:09:31.328 "flush": true, 00:09:31.328 "reset": true, 00:09:31.328 "nvme_admin": false, 00:09:31.328 "nvme_io": false, 00:09:31.328 "nvme_io_md": false, 00:09:31.328 "write_zeroes": true, 00:09:31.328 "zcopy": true, 00:09:31.328 "get_zone_info": false, 00:09:31.328 "zone_management": false, 00:09:31.328 "zone_append": false, 00:09:31.328 "compare": false, 00:09:31.328 "compare_and_write": false, 00:09:31.328 "abort": true, 00:09:31.328 "seek_hole": false, 00:09:31.328 "seek_data": false, 00:09:31.328 "copy": true, 00:09:31.328 "nvme_iov_md": false 00:09:31.328 }, 00:09:31.328 "memory_domains": [ 00:09:31.328 { 00:09:31.328 "dma_device_id": "system", 00:09:31.328 "dma_device_type": 1 00:09:31.328 }, 00:09:31.328 { 00:09:31.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.328 "dma_device_type": 2 00:09:31.328 } 00:09:31.328 ], 00:09:31.328 "driver_specific": {} 00:09:31.328 } 00:09:31.328 ] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.328 16:22:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.328 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.588 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.588 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.588 "name": "Existed_Raid", 00:09:31.588 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:31.588 "strip_size_kb": 64, 00:09:31.588 "state": "configuring", 00:09:31.588 "raid_level": "raid0", 00:09:31.588 "superblock": true, 00:09:31.588 "num_base_bdevs": 4, 00:09:31.588 "num_base_bdevs_discovered": 3, 00:09:31.588 "num_base_bdevs_operational": 4, 00:09:31.588 "base_bdevs_list": [ 00:09:31.588 { 00:09:31.588 "name": "BaseBdev1", 00:09:31.588 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:31.588 "is_configured": true, 00:09:31.588 "data_offset": 2048, 00:09:31.588 "data_size": 63488 00:09:31.588 }, 00:09:31.588 { 
00:09:31.588 "name": null, 00:09:31.588 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:31.588 "is_configured": false, 00:09:31.588 "data_offset": 0, 00:09:31.588 "data_size": 63488 00:09:31.588 }, 00:09:31.588 { 00:09:31.588 "name": "BaseBdev3", 00:09:31.588 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:31.588 "is_configured": true, 00:09:31.588 "data_offset": 2048, 00:09:31.588 "data_size": 63488 00:09:31.588 }, 00:09:31.588 { 00:09:31.588 "name": "BaseBdev4", 00:09:31.588 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:31.588 "is_configured": true, 00:09:31.588 "data_offset": 2048, 00:09:31.588 "data_size": 63488 00:09:31.588 } 00:09:31.588 ] 00:09:31.588 }' 00:09:31.588 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.588 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.848 [2024-11-28 16:22:23.567949] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.848 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.108 16:22:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.108 "name": "Existed_Raid", 00:09:32.108 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:32.108 "strip_size_kb": 64, 00:09:32.108 "state": "configuring", 00:09:32.108 "raid_level": "raid0", 00:09:32.108 "superblock": true, 00:09:32.108 "num_base_bdevs": 4, 00:09:32.108 "num_base_bdevs_discovered": 2, 00:09:32.108 "num_base_bdevs_operational": 4, 00:09:32.108 "base_bdevs_list": [ 00:09:32.108 { 00:09:32.108 "name": "BaseBdev1", 00:09:32.108 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:32.108 "is_configured": true, 00:09:32.108 "data_offset": 2048, 00:09:32.108 "data_size": 63488 00:09:32.108 }, 00:09:32.108 { 00:09:32.108 "name": null, 00:09:32.108 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:32.108 "is_configured": false, 00:09:32.108 "data_offset": 0, 00:09:32.108 "data_size": 63488 00:09:32.108 }, 00:09:32.108 { 00:09:32.108 "name": null, 00:09:32.108 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:32.108 "is_configured": false, 00:09:32.108 "data_offset": 0, 00:09:32.108 "data_size": 63488 00:09:32.108 }, 00:09:32.108 { 00:09:32.108 "name": "BaseBdev4", 00:09:32.108 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:32.108 "is_configured": true, 00:09:32.108 "data_offset": 2048, 00:09:32.108 "data_size": 63488 00:09:32.108 } 00:09:32.108 ] 00:09:32.108 }' 00:09:32.108 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.108 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.369 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.369 16:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.369 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.369 
16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.369 16:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.369 [2024-11-28 16:22:24.015227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.369 "name": "Existed_Raid", 00:09:32.369 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:32.369 "strip_size_kb": 64, 00:09:32.369 "state": "configuring", 00:09:32.369 "raid_level": "raid0", 00:09:32.369 "superblock": true, 00:09:32.369 "num_base_bdevs": 4, 00:09:32.369 "num_base_bdevs_discovered": 3, 00:09:32.369 "num_base_bdevs_operational": 4, 00:09:32.369 "base_bdevs_list": [ 00:09:32.369 { 00:09:32.369 "name": "BaseBdev1", 00:09:32.369 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:32.369 "is_configured": true, 00:09:32.369 "data_offset": 2048, 00:09:32.369 "data_size": 63488 00:09:32.369 }, 00:09:32.369 { 00:09:32.369 "name": null, 00:09:32.369 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:32.369 "is_configured": false, 00:09:32.369 "data_offset": 0, 00:09:32.369 "data_size": 63488 00:09:32.369 }, 00:09:32.369 { 00:09:32.369 "name": "BaseBdev3", 00:09:32.369 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:32.369 "is_configured": true, 00:09:32.369 "data_offset": 2048, 00:09:32.369 "data_size": 63488 00:09:32.369 }, 00:09:32.369 { 00:09:32.369 "name": "BaseBdev4", 00:09:32.369 "uuid": 
"895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:32.369 "is_configured": true, 00:09:32.369 "data_offset": 2048, 00:09:32.369 "data_size": 63488 00:09:32.369 } 00:09:32.369 ] 00:09:32.369 }' 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.369 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 [2024-11-28 16:22:24.462467] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.939 "name": "Existed_Raid", 00:09:32.939 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:32.939 "strip_size_kb": 64, 00:09:32.939 "state": "configuring", 00:09:32.939 "raid_level": "raid0", 00:09:32.939 "superblock": true, 00:09:32.939 "num_base_bdevs": 4, 00:09:32.939 "num_base_bdevs_discovered": 2, 00:09:32.939 "num_base_bdevs_operational": 4, 00:09:32.939 "base_bdevs_list": [ 00:09:32.939 { 00:09:32.939 "name": null, 00:09:32.939 
"uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:32.939 "is_configured": false, 00:09:32.939 "data_offset": 0, 00:09:32.939 "data_size": 63488 00:09:32.940 }, 00:09:32.940 { 00:09:32.940 "name": null, 00:09:32.940 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:32.940 "is_configured": false, 00:09:32.940 "data_offset": 0, 00:09:32.940 "data_size": 63488 00:09:32.940 }, 00:09:32.940 { 00:09:32.940 "name": "BaseBdev3", 00:09:32.940 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:32.940 "is_configured": true, 00:09:32.940 "data_offset": 2048, 00:09:32.940 "data_size": 63488 00:09:32.940 }, 00:09:32.940 { 00:09:32.940 "name": "BaseBdev4", 00:09:32.940 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:32.940 "is_configured": true, 00:09:32.940 "data_offset": 2048, 00:09:32.940 "data_size": 63488 00:09:32.940 } 00:09:32.940 ] 00:09:32.940 }' 00:09:32.940 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.940 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 [2024-11-28 16:22:24.936016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.201 16:22:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.201 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.466 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.466 "name": "Existed_Raid", 00:09:33.466 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:33.466 "strip_size_kb": 64, 00:09:33.466 "state": "configuring", 00:09:33.466 "raid_level": "raid0", 00:09:33.466 "superblock": true, 00:09:33.466 "num_base_bdevs": 4, 00:09:33.466 "num_base_bdevs_discovered": 3, 00:09:33.466 "num_base_bdevs_operational": 4, 00:09:33.466 "base_bdevs_list": [ 00:09:33.466 { 00:09:33.466 "name": null, 00:09:33.466 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:33.466 "is_configured": false, 00:09:33.466 "data_offset": 0, 00:09:33.466 "data_size": 63488 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "name": "BaseBdev2", 00:09:33.466 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:33.466 "is_configured": true, 00:09:33.466 "data_offset": 2048, 00:09:33.466 "data_size": 63488 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "name": "BaseBdev3", 00:09:33.466 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:33.466 "is_configured": true, 00:09:33.466 "data_offset": 2048, 00:09:33.466 "data_size": 63488 00:09:33.466 }, 00:09:33.466 { 00:09:33.466 "name": "BaseBdev4", 00:09:33.466 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:33.466 "is_configured": true, 00:09:33.466 "data_offset": 2048, 00:09:33.466 "data_size": 63488 00:09:33.466 } 00:09:33.466 ] 00:09:33.466 }' 00:09:33.466 16:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.466 16:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.725 16:22:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 295e2ef4-eb46-4d22-a50d-0b0d76ac6981 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 [2024-11-28 16:22:25.406005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:33.725 [2024-11-28 16:22:25.406260] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:33.725 [2024-11-28 16:22:25.406309] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:33.725 [2024-11-28 16:22:25.406575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:09:33.725 [2024-11-28 16:22:25.406727] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:33.725 NewBaseBdev 00:09:33.725 [2024-11-28 16:22:25.406778] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:33.725 [2024-11-28 16:22:25.406920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:33.725 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.725 16:22:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 [ 00:09:33.725 { 00:09:33.725 "name": "NewBaseBdev", 00:09:33.725 "aliases": [ 00:09:33.725 "295e2ef4-eb46-4d22-a50d-0b0d76ac6981" 00:09:33.725 ], 00:09:33.725 "product_name": "Malloc disk", 00:09:33.725 "block_size": 512, 00:09:33.726 "num_blocks": 65536, 00:09:33.726 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:33.726 "assigned_rate_limits": { 00:09:33.726 "rw_ios_per_sec": 0, 00:09:33.726 "rw_mbytes_per_sec": 0, 00:09:33.726 "r_mbytes_per_sec": 0, 00:09:33.726 "w_mbytes_per_sec": 0 00:09:33.726 }, 00:09:33.726 "claimed": true, 00:09:33.726 "claim_type": "exclusive_write", 00:09:33.726 "zoned": false, 00:09:33.726 "supported_io_types": { 00:09:33.726 "read": true, 00:09:33.726 "write": true, 00:09:33.726 "unmap": true, 00:09:33.726 "flush": true, 00:09:33.726 "reset": true, 00:09:33.726 "nvme_admin": false, 00:09:33.726 "nvme_io": false, 00:09:33.726 "nvme_io_md": false, 00:09:33.726 "write_zeroes": true, 00:09:33.726 "zcopy": true, 00:09:33.726 "get_zone_info": false, 00:09:33.726 "zone_management": false, 00:09:33.726 "zone_append": false, 00:09:33.726 "compare": false, 00:09:33.726 "compare_and_write": false, 00:09:33.726 "abort": true, 00:09:33.726 "seek_hole": false, 00:09:33.726 "seek_data": false, 00:09:33.726 "copy": true, 00:09:33.726 "nvme_iov_md": false 00:09:33.726 }, 00:09:33.726 "memory_domains": [ 00:09:33.726 { 00:09:33.726 "dma_device_id": "system", 00:09:33.726 "dma_device_type": 1 00:09:33.726 }, 00:09:33.726 { 00:09:33.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.726 "dma_device_type": 2 00:09:33.726 } 00:09:33.726 ], 00:09:33.726 "driver_specific": {} 00:09:33.726 } 00:09:33.726 ] 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.726 16:22:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.726 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.983 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.983 "name": "Existed_Raid", 00:09:33.983 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:33.983 "strip_size_kb": 64, 00:09:33.983 
"state": "online", 00:09:33.983 "raid_level": "raid0", 00:09:33.983 "superblock": true, 00:09:33.983 "num_base_bdevs": 4, 00:09:33.983 "num_base_bdevs_discovered": 4, 00:09:33.983 "num_base_bdevs_operational": 4, 00:09:33.983 "base_bdevs_list": [ 00:09:33.983 { 00:09:33.983 "name": "NewBaseBdev", 00:09:33.983 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:33.983 "is_configured": true, 00:09:33.983 "data_offset": 2048, 00:09:33.983 "data_size": 63488 00:09:33.983 }, 00:09:33.983 { 00:09:33.983 "name": "BaseBdev2", 00:09:33.983 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:33.983 "is_configured": true, 00:09:33.983 "data_offset": 2048, 00:09:33.983 "data_size": 63488 00:09:33.983 }, 00:09:33.983 { 00:09:33.983 "name": "BaseBdev3", 00:09:33.983 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:33.983 "is_configured": true, 00:09:33.983 "data_offset": 2048, 00:09:33.983 "data_size": 63488 00:09:33.983 }, 00:09:33.983 { 00:09:33.983 "name": "BaseBdev4", 00:09:33.983 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:33.983 "is_configured": true, 00:09:33.983 "data_offset": 2048, 00:09:33.983 "data_size": 63488 00:09:33.983 } 00:09:33.983 ] 00:09:33.983 }' 00:09:33.983 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.983 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.241 
16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.241 [2024-11-28 16:22:25.845642] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.241 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.241 "name": "Existed_Raid", 00:09:34.241 "aliases": [ 00:09:34.241 "3fff87f8-d93a-4108-ac2b-3b4b28554b32" 00:09:34.241 ], 00:09:34.241 "product_name": "Raid Volume", 00:09:34.241 "block_size": 512, 00:09:34.241 "num_blocks": 253952, 00:09:34.241 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:34.241 "assigned_rate_limits": { 00:09:34.241 "rw_ios_per_sec": 0, 00:09:34.241 "rw_mbytes_per_sec": 0, 00:09:34.241 "r_mbytes_per_sec": 0, 00:09:34.241 "w_mbytes_per_sec": 0 00:09:34.241 }, 00:09:34.241 "claimed": false, 00:09:34.241 "zoned": false, 00:09:34.241 "supported_io_types": { 00:09:34.241 "read": true, 00:09:34.241 "write": true, 00:09:34.241 "unmap": true, 00:09:34.241 "flush": true, 00:09:34.241 "reset": true, 00:09:34.241 "nvme_admin": false, 00:09:34.241 "nvme_io": false, 00:09:34.241 "nvme_io_md": false, 00:09:34.241 "write_zeroes": true, 00:09:34.241 "zcopy": false, 00:09:34.242 "get_zone_info": false, 00:09:34.242 "zone_management": false, 00:09:34.242 "zone_append": false, 00:09:34.242 "compare": false, 00:09:34.242 "compare_and_write": false, 00:09:34.242 "abort": 
false, 00:09:34.242 "seek_hole": false, 00:09:34.242 "seek_data": false, 00:09:34.242 "copy": false, 00:09:34.242 "nvme_iov_md": false 00:09:34.242 }, 00:09:34.242 "memory_domains": [ 00:09:34.242 { 00:09:34.242 "dma_device_id": "system", 00:09:34.242 "dma_device_type": 1 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.242 "dma_device_type": 2 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "system", 00:09:34.242 "dma_device_type": 1 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.242 "dma_device_type": 2 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "system", 00:09:34.242 "dma_device_type": 1 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.242 "dma_device_type": 2 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "system", 00:09:34.242 "dma_device_type": 1 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.242 "dma_device_type": 2 00:09:34.242 } 00:09:34.242 ], 00:09:34.242 "driver_specific": { 00:09:34.242 "raid": { 00:09:34.242 "uuid": "3fff87f8-d93a-4108-ac2b-3b4b28554b32", 00:09:34.242 "strip_size_kb": 64, 00:09:34.242 "state": "online", 00:09:34.242 "raid_level": "raid0", 00:09:34.242 "superblock": true, 00:09:34.242 "num_base_bdevs": 4, 00:09:34.242 "num_base_bdevs_discovered": 4, 00:09:34.242 "num_base_bdevs_operational": 4, 00:09:34.242 "base_bdevs_list": [ 00:09:34.242 { 00:09:34.242 "name": "NewBaseBdev", 00:09:34.242 "uuid": "295e2ef4-eb46-4d22-a50d-0b0d76ac6981", 00:09:34.242 "is_configured": true, 00:09:34.242 "data_offset": 2048, 00:09:34.242 "data_size": 63488 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "name": "BaseBdev2", 00:09:34.242 "uuid": "4afacd03-27a3-45e7-9536-e5ac6eec6ec6", 00:09:34.242 "is_configured": true, 00:09:34.242 "data_offset": 2048, 00:09:34.242 "data_size": 63488 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 
"name": "BaseBdev3", 00:09:34.242 "uuid": "92db74c2-dcaa-49c1-aa91-1837760e7aff", 00:09:34.242 "is_configured": true, 00:09:34.242 "data_offset": 2048, 00:09:34.242 "data_size": 63488 00:09:34.242 }, 00:09:34.242 { 00:09:34.242 "name": "BaseBdev4", 00:09:34.242 "uuid": "895a8fed-aa1e-40f2-ad90-3ec4c9331288", 00:09:34.242 "is_configured": true, 00:09:34.242 "data_offset": 2048, 00:09:34.242 "data_size": 63488 00:09:34.242 } 00:09:34.242 ] 00:09:34.242 } 00:09:34.242 } 00:09:34.242 }' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:34.242 BaseBdev2 00:09:34.242 BaseBdev3 00:09:34.242 BaseBdev4' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.242 16:22:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.242 16:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.501 [2024-11-28 16:22:26.144749] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.501 [2024-11-28 16:22:26.144780] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.501 [2024-11-28 16:22:26.144860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.501 [2024-11-28 16:22:26.144937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.501 [2024-11-28 16:22:26.144979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81025 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81025 ']' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81025 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81025 00:09:34.501 killing process with pid 81025 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81025' 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81025 00:09:34.501 [2024-11-28 16:22:26.187021] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.501 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81025 00:09:34.501 [2024-11-28 16:22:26.227279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.761 ************************************ 00:09:34.761 END TEST raid_state_function_test_sb 00:09:34.761 ************************************ 00:09:34.761 16:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:34.761 00:09:34.761 real 0m9.413s 00:09:34.761 user 0m16.086s 00:09:34.761 sys 
0m1.939s 00:09:34.761 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.761 16:22:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.761 16:22:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:34.761 16:22:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.761 16:22:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.761 16:22:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.022 ************************************ 00:09:35.022 START TEST raid_superblock_test 00:09:35.022 ************************************ 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81673 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81673 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81673 ']' 00:09:35.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.022 16:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.022 [2024-11-28 16:22:26.624161] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:35.022 [2024-11-28 16:22:26.624305] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81673 ] 00:09:35.022 [2024-11-28 16:22:26.767217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.283 [2024-11-28 16:22:26.813791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.283 [2024-11-28 16:22:26.854743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.283 [2024-11-28 16:22:26.854781] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:35.872 
16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.872 malloc1 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.872 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.872 [2024-11-28 16:22:27.467980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.873 [2024-11-28 16:22:27.468100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.873 [2024-11-28 16:22:27.468146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:35.873 [2024-11-28 16:22:27.468194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.873 [2024-11-28 16:22:27.470245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.873 [2024-11-28 16:22:27.470316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.873 pt1 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 malloc2 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 [2024-11-28 16:22:27.508056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.873 [2024-11-28 16:22:27.508148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.873 [2024-11-28 16:22:27.508181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:35.873 [2024-11-28 16:22:27.508210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.873 [2024-11-28 16:22:27.510195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.873 [2024-11-28 16:22:27.510262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.873 
pt2 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 malloc3 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 [2024-11-28 16:22:27.540316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.873 [2024-11-28 16:22:27.540408] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.873 [2024-11-28 16:22:27.540447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:35.873 [2024-11-28 16:22:27.540479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.873 [2024-11-28 16:22:27.542534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.873 [2024-11-28 16:22:27.542604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.873 pt3 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 malloc4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 [2024-11-28 16:22:27.572652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:35.873 [2024-11-28 16:22:27.572702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.873 [2024-11-28 16:22:27.572716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:35.873 [2024-11-28 16:22:27.572728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.873 [2024-11-28 16:22:27.574714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.873 [2024-11-28 16:22:27.574749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:35.873 pt4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 [2024-11-28 16:22:27.584711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.873 [2024-11-28 
16:22:27.586481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.873 [2024-11-28 16:22:27.586580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.873 [2024-11-28 16:22:27.586646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:35.873 [2024-11-28 16:22:27.586798] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:35.873 [2024-11-28 16:22:27.586811] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:35.873 [2024-11-28 16:22:27.587054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:35.873 [2024-11-28 16:22:27.587186] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:35.873 [2024-11-28 16:22:27.587200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:35.873 [2024-11-28 16:22:27.587330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.873 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.873 "name": "raid_bdev1", 00:09:35.873 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:35.873 "strip_size_kb": 64, 00:09:35.873 "state": "online", 00:09:35.873 "raid_level": "raid0", 00:09:35.873 "superblock": true, 00:09:35.873 "num_base_bdevs": 4, 00:09:35.873 "num_base_bdevs_discovered": 4, 00:09:35.873 "num_base_bdevs_operational": 4, 00:09:35.873 "base_bdevs_list": [ 00:09:35.873 { 00:09:35.873 "name": "pt1", 00:09:35.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.874 "is_configured": true, 00:09:35.874 "data_offset": 2048, 00:09:35.874 "data_size": 63488 00:09:35.874 }, 00:09:35.874 { 00:09:35.874 "name": "pt2", 00:09:35.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.874 "is_configured": true, 00:09:35.874 "data_offset": 2048, 00:09:35.874 "data_size": 63488 00:09:35.874 }, 00:09:35.874 { 00:09:35.874 "name": "pt3", 00:09:35.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.874 "is_configured": true, 00:09:35.874 "data_offset": 2048, 00:09:35.874 
"data_size": 63488 00:09:35.874 }, 00:09:35.874 { 00:09:35.874 "name": "pt4", 00:09:35.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:35.874 "is_configured": true, 00:09:35.874 "data_offset": 2048, 00:09:35.874 "data_size": 63488 00:09:35.874 } 00:09:35.874 ] 00:09:35.874 }' 00:09:35.874 16:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.874 16:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.444 [2024-11-28 16:22:28.028237] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.444 "name": "raid_bdev1", 00:09:36.444 "aliases": [ 00:09:36.444 "6bfb83d2-c2d4-4999-870d-72c0f18b6f97" 
00:09:36.444 ], 00:09:36.444 "product_name": "Raid Volume", 00:09:36.444 "block_size": 512, 00:09:36.444 "num_blocks": 253952, 00:09:36.444 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:36.444 "assigned_rate_limits": { 00:09:36.444 "rw_ios_per_sec": 0, 00:09:36.444 "rw_mbytes_per_sec": 0, 00:09:36.444 "r_mbytes_per_sec": 0, 00:09:36.444 "w_mbytes_per_sec": 0 00:09:36.444 }, 00:09:36.444 "claimed": false, 00:09:36.444 "zoned": false, 00:09:36.444 "supported_io_types": { 00:09:36.444 "read": true, 00:09:36.444 "write": true, 00:09:36.444 "unmap": true, 00:09:36.444 "flush": true, 00:09:36.444 "reset": true, 00:09:36.444 "nvme_admin": false, 00:09:36.444 "nvme_io": false, 00:09:36.444 "nvme_io_md": false, 00:09:36.444 "write_zeroes": true, 00:09:36.444 "zcopy": false, 00:09:36.444 "get_zone_info": false, 00:09:36.444 "zone_management": false, 00:09:36.444 "zone_append": false, 00:09:36.444 "compare": false, 00:09:36.444 "compare_and_write": false, 00:09:36.444 "abort": false, 00:09:36.444 "seek_hole": false, 00:09:36.444 "seek_data": false, 00:09:36.444 "copy": false, 00:09:36.444 "nvme_iov_md": false 00:09:36.444 }, 00:09:36.444 "memory_domains": [ 00:09:36.444 { 00:09:36.444 "dma_device_id": "system", 00:09:36.444 "dma_device_type": 1 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.444 "dma_device_type": 2 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "system", 00:09:36.444 "dma_device_type": 1 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.444 "dma_device_type": 2 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "system", 00:09:36.444 "dma_device_type": 1 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.444 "dma_device_type": 2 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "system", 00:09:36.444 "dma_device_type": 1 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:36.444 "dma_device_type": 2 00:09:36.444 } 00:09:36.444 ], 00:09:36.444 "driver_specific": { 00:09:36.444 "raid": { 00:09:36.444 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:36.444 "strip_size_kb": 64, 00:09:36.444 "state": "online", 00:09:36.444 "raid_level": "raid0", 00:09:36.444 "superblock": true, 00:09:36.444 "num_base_bdevs": 4, 00:09:36.444 "num_base_bdevs_discovered": 4, 00:09:36.444 "num_base_bdevs_operational": 4, 00:09:36.444 "base_bdevs_list": [ 00:09:36.444 { 00:09:36.444 "name": "pt1", 00:09:36.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 2048, 00:09:36.444 "data_size": 63488 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "name": "pt2", 00:09:36.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 2048, 00:09:36.444 "data_size": 63488 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "name": "pt3", 00:09:36.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 2048, 00:09:36.444 "data_size": 63488 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "name": "pt4", 00:09:36.444 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 2048, 00:09:36.444 "data_size": 63488 00:09:36.444 } 00:09:36.444 ] 00:09:36.444 } 00:09:36.444 } 00:09:36.444 }' 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:36.444 pt2 00:09:36.444 pt3 00:09:36.444 pt4' 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.444 16:22:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.445 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.704 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.704 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.704 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.704 16:22:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.704 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:36.704 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.704 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 [2024-11-28 16:22:28.347643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6bfb83d2-c2d4-4999-870d-72c0f18b6f97 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6bfb83d2-c2d4-4999-870d-72c0f18b6f97 ']' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 [2024-11-28 16:22:28.391316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.705 [2024-11-28 16:22:28.391396] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.705 [2024-11-28 16:22:28.391478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.705 [2024-11-28 16:22:28.391559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.705 [2024-11-28 16:22:28.391571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.705 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.965 16:22:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.965 [2024-11-28 16:22:28.551073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:36.965 [2024-11-28 16:22:28.552936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:36.965 [2024-11-28 16:22:28.553048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:36.965 [2024-11-28 16:22:28.553081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:36.965 [2024-11-28 16:22:28.553127] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:36.965 [2024-11-28 16:22:28.553174] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:36.965 [2024-11-28 16:22:28.553194] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:36.965 [2024-11-28 16:22:28.553210] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:36.965 [2024-11-28 16:22:28.553224] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.965 [2024-11-28 16:22:28.553234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:09:36.965 request: 00:09:36.965 { 00:09:36.965 "name": "raid_bdev1", 00:09:36.965 "raid_level": "raid0", 00:09:36.965 "base_bdevs": [ 00:09:36.965 "malloc1", 00:09:36.965 "malloc2", 00:09:36.965 "malloc3", 00:09:36.965 "malloc4" 00:09:36.965 ], 00:09:36.965 "strip_size_kb": 64, 00:09:36.965 "superblock": false, 00:09:36.965 "method": "bdev_raid_create", 00:09:36.965 "req_id": 1 00:09:36.965 } 00:09:36.965 Got JSON-RPC error response 00:09:36.965 response: 00:09:36.965 { 00:09:36.965 "code": -17, 00:09:36.965 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:36.965 } 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.965 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.965 [2024-11-28 16:22:28.618912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:36.965 [2024-11-28 16:22:28.619005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.966 [2024-11-28 16:22:28.619043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:36.966 [2024-11-28 16:22:28.619071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.966 [2024-11-28 16:22:28.621165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.966 [2024-11-28 16:22:28.621234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:36.966 [2024-11-28 16:22:28.621331] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:36.966 [2024-11-28 16:22:28.621404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:36.966 pt1 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.966 "name": "raid_bdev1", 00:09:36.966 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:36.966 "strip_size_kb": 64, 00:09:36.966 "state": "configuring", 00:09:36.966 "raid_level": "raid0", 00:09:36.966 "superblock": true, 00:09:36.966 "num_base_bdevs": 4, 00:09:36.966 "num_base_bdevs_discovered": 1, 00:09:36.966 "num_base_bdevs_operational": 4, 00:09:36.966 "base_bdevs_list": [ 00:09:36.966 { 00:09:36.966 "name": "pt1", 00:09:36.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:36.966 "is_configured": true, 00:09:36.966 "data_offset": 2048, 00:09:36.966 "data_size": 63488 00:09:36.966 }, 00:09:36.966 { 00:09:36.966 "name": null, 00:09:36.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:36.966 "is_configured": false, 00:09:36.966 "data_offset": 2048, 00:09:36.966 "data_size": 63488 00:09:36.966 }, 00:09:36.966 { 00:09:36.966 "name": null, 00:09:36.966 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.966 "is_configured": false, 00:09:36.966 "data_offset": 2048, 00:09:36.966 "data_size": 63488 00:09:36.966 }, 00:09:36.966 { 00:09:36.966 "name": null, 00:09:36.966 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:36.966 "is_configured": false, 00:09:36.966 "data_offset": 2048, 00:09:36.966 "data_size": 63488 00:09:36.966 } 00:09:36.966 ] 00:09:36.966 }' 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.966 16:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.535 [2024-11-28 16:22:29.014241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.535 [2024-11-28 16:22:29.014303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.535 [2024-11-28 16:22:29.014325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:37.535 [2024-11-28 16:22:29.014334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.535 [2024-11-28 16:22:29.014726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.535 [2024-11-28 16:22:29.014741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.535 [2024-11-28 16:22:29.014820] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.535 [2024-11-28 16:22:29.014857] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.535 pt2 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.535 [2024-11-28 16:22:29.026203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.535 16:22:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.535 "name": "raid_bdev1", 00:09:37.535 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:37.535 "strip_size_kb": 64, 00:09:37.535 "state": "configuring", 00:09:37.535 "raid_level": "raid0", 00:09:37.535 "superblock": true, 00:09:37.535 "num_base_bdevs": 4, 00:09:37.535 "num_base_bdevs_discovered": 1, 00:09:37.535 "num_base_bdevs_operational": 4, 00:09:37.535 "base_bdevs_list": [ 00:09:37.535 { 00:09:37.535 "name": "pt1", 00:09:37.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.535 "is_configured": true, 00:09:37.535 "data_offset": 2048, 00:09:37.535 "data_size": 63488 00:09:37.535 }, 00:09:37.535 { 00:09:37.535 "name": null, 00:09:37.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.535 "is_configured": false, 00:09:37.535 "data_offset": 0, 00:09:37.535 "data_size": 63488 00:09:37.535 }, 00:09:37.535 { 00:09:37.535 "name": null, 00:09:37.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.535 "is_configured": false, 00:09:37.535 "data_offset": 2048, 00:09:37.535 "data_size": 63488 00:09:37.535 }, 00:09:37.535 { 00:09:37.535 "name": null, 00:09:37.535 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:37.535 "is_configured": false, 00:09:37.535 "data_offset": 2048, 00:09:37.535 "data_size": 63488 00:09:37.535 } 00:09:37.535 ] 00:09:37.535 }' 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.535 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.796 [2024-11-28 16:22:29.457466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.796 [2024-11-28 16:22:29.457593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.796 [2024-11-28 16:22:29.457627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:37.796 [2024-11-28 16:22:29.457656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.796 [2024-11-28 16:22:29.458076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.796 [2024-11-28 16:22:29.458132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.796 [2024-11-28 16:22:29.458232] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:37.796 [2024-11-28 16:22:29.458285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.796 pt2 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.796 [2024-11-28 16:22:29.469389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:37.796 [2024-11-28 16:22:29.469481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.796 [2024-11-28 16:22:29.469515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:37.796 [2024-11-28 16:22:29.469543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.796 [2024-11-28 16:22:29.469886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.796 [2024-11-28 16:22:29.469941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:37.796 [2024-11-28 16:22:29.470023] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:37.796 [2024-11-28 16:22:29.470070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:37.796 pt3 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.796 [2024-11-28 16:22:29.481373] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:09:37.796 [2024-11-28 16:22:29.481461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.796 [2024-11-28 16:22:29.481479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:37.796 [2024-11-28 16:22:29.481488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.796 [2024-11-28 16:22:29.481774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.796 [2024-11-28 16:22:29.481791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:37.796 [2024-11-28 16:22:29.481860] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:37.796 [2024-11-28 16:22:29.481880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:37.796 [2024-11-28 16:22:29.481973] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:37.796 [2024-11-28 16:22:29.481985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:37.796 [2024-11-28 16:22:29.482194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:37.796 [2024-11-28 16:22:29.482313] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:37.796 [2024-11-28 16:22:29.482323] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:37.796 [2024-11-28 16:22:29.482414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.796 pt4 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:37.796 
16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:37.796 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.797 "name": "raid_bdev1", 00:09:37.797 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:37.797 "strip_size_kb": 64, 00:09:37.797 "state": "online", 00:09:37.797 "raid_level": "raid0", 00:09:37.797 "superblock": true, 00:09:37.797 
"num_base_bdevs": 4, 00:09:37.797 "num_base_bdevs_discovered": 4, 00:09:37.797 "num_base_bdevs_operational": 4, 00:09:37.797 "base_bdevs_list": [ 00:09:37.797 { 00:09:37.797 "name": "pt1", 00:09:37.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.797 "is_configured": true, 00:09:37.797 "data_offset": 2048, 00:09:37.797 "data_size": 63488 00:09:37.797 }, 00:09:37.797 { 00:09:37.797 "name": "pt2", 00:09:37.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.797 "is_configured": true, 00:09:37.797 "data_offset": 2048, 00:09:37.797 "data_size": 63488 00:09:37.797 }, 00:09:37.797 { 00:09:37.797 "name": "pt3", 00:09:37.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.797 "is_configured": true, 00:09:37.797 "data_offset": 2048, 00:09:37.797 "data_size": 63488 00:09:37.797 }, 00:09:37.797 { 00:09:37.797 "name": "pt4", 00:09:37.797 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:37.797 "is_configured": true, 00:09:37.797 "data_offset": 2048, 00:09:37.797 "data_size": 63488 00:09:37.797 } 00:09:37.797 ] 00:09:37.797 }' 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.797 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.367 [2024-11-28 16:22:29.944965] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.367 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.367 "name": "raid_bdev1", 00:09:38.367 "aliases": [ 00:09:38.367 "6bfb83d2-c2d4-4999-870d-72c0f18b6f97" 00:09:38.367 ], 00:09:38.367 "product_name": "Raid Volume", 00:09:38.367 "block_size": 512, 00:09:38.367 "num_blocks": 253952, 00:09:38.367 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:38.367 "assigned_rate_limits": { 00:09:38.367 "rw_ios_per_sec": 0, 00:09:38.367 "rw_mbytes_per_sec": 0, 00:09:38.367 "r_mbytes_per_sec": 0, 00:09:38.367 "w_mbytes_per_sec": 0 00:09:38.367 }, 00:09:38.367 "claimed": false, 00:09:38.367 "zoned": false, 00:09:38.367 "supported_io_types": { 00:09:38.367 "read": true, 00:09:38.367 "write": true, 00:09:38.367 "unmap": true, 00:09:38.367 "flush": true, 00:09:38.367 "reset": true, 00:09:38.367 "nvme_admin": false, 00:09:38.367 "nvme_io": false, 00:09:38.367 "nvme_io_md": false, 00:09:38.367 "write_zeroes": true, 00:09:38.367 "zcopy": false, 00:09:38.367 "get_zone_info": false, 00:09:38.367 "zone_management": false, 00:09:38.367 "zone_append": false, 00:09:38.367 "compare": false, 00:09:38.367 "compare_and_write": false, 00:09:38.367 "abort": false, 00:09:38.367 "seek_hole": false, 00:09:38.367 "seek_data": false, 00:09:38.367 "copy": false, 00:09:38.367 "nvme_iov_md": false 00:09:38.367 }, 00:09:38.367 "memory_domains": [ 00:09:38.367 { 00:09:38.367 "dma_device_id": "system", 
00:09:38.367 "dma_device_type": 1 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.367 "dma_device_type": 2 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "system", 00:09:38.367 "dma_device_type": 1 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.367 "dma_device_type": 2 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "system", 00:09:38.367 "dma_device_type": 1 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.367 "dma_device_type": 2 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "system", 00:09:38.367 "dma_device_type": 1 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.367 "dma_device_type": 2 00:09:38.367 } 00:09:38.367 ], 00:09:38.367 "driver_specific": { 00:09:38.367 "raid": { 00:09:38.367 "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97", 00:09:38.367 "strip_size_kb": 64, 00:09:38.367 "state": "online", 00:09:38.367 "raid_level": "raid0", 00:09:38.367 "superblock": true, 00:09:38.367 "num_base_bdevs": 4, 00:09:38.367 "num_base_bdevs_discovered": 4, 00:09:38.367 "num_base_bdevs_operational": 4, 00:09:38.367 "base_bdevs_list": [ 00:09:38.367 { 00:09:38.367 "name": "pt1", 00:09:38.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.367 "is_configured": true, 00:09:38.367 "data_offset": 2048, 00:09:38.367 "data_size": 63488 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "name": "pt2", 00:09:38.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.367 "is_configured": true, 00:09:38.367 "data_offset": 2048, 00:09:38.367 "data_size": 63488 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "name": "pt3", 00:09:38.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.367 "is_configured": true, 00:09:38.367 "data_offset": 2048, 00:09:38.367 "data_size": 63488 00:09:38.367 }, 00:09:38.367 { 00:09:38.367 "name": "pt4", 00:09:38.367 
"uuid": "00000000-0000-0000-0000-000000000004", 00:09:38.367 "is_configured": true, 00:09:38.367 "data_offset": 2048, 00:09:38.367 "data_size": 63488 00:09:38.367 } 00:09:38.367 ] 00:09:38.367 } 00:09:38.367 } 00:09:38.368 }' 00:09:38.368 16:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:38.368 pt2 00:09:38.368 pt3 00:09:38.368 pt4' 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.368 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.627 
16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.627 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:38.628 [2024-11-28 16:22:30.260320] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6bfb83d2-c2d4-4999-870d-72c0f18b6f97 '!=' 6bfb83d2-c2d4-4999-870d-72c0f18b6f97 ']' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81673 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81673 ']' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81673 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:38.628 16:22:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81673 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81673' 00:09:38.628 killing process with pid 81673 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81673 00:09:38.628 [2024-11-28 16:22:30.346009] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.628 [2024-11-28 16:22:30.346149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.628 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81673 00:09:38.628 [2024-11-28 16:22:30.346248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.628 [2024-11-28 16:22:30.346262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:38.628 [2024-11-28 16:22:30.389727] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.887 16:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:38.888 00:09:38.888 real 0m4.088s 00:09:38.888 user 0m6.448s 00:09:38.888 sys 0m0.875s 00:09:38.888 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.888 16:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.888 ************************************ 00:09:38.888 END TEST raid_superblock_test 00:09:38.888 ************************************ 00:09:39.148 
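The superblock test above checks raid state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and comparing fields against the expected `online raid0 64 4` arguments of `verify_raid_bdev_state`. A minimal Python sketch of that same select-and-verify step, using sample JSON whose field names and values are copied from the log output above (the standalone script itself is illustrative, not part of the test harness):

```python
import json

# JSON in the shape bdev_raid_get_bdevs returns; field names and values
# are taken from the raid_bdev_info dump in the log above.
rpc_output = json.dumps([
    {
        "name": "raid_bdev1",
        "uuid": "6bfb83d2-c2d4-4999-870d-72c0f18b6f97",
        "strip_size_kb": 64,
        "state": "online",
        "raid_level": "raid0",
        "superblock": True,
        "num_base_bdevs": 4,
        "num_base_bdevs_discovered": 4,
        "num_base_bdevs_operational": 4,
    }
])

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in json.loads(rpc_output) if b["name"] == "raid_bdev1")

# Equivalent of: verify_raid_bdev_state raid_bdev1 online raid0 64 4
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
print("raid_bdev1 state verified")
```

The shell helper performs the same comparisons with `[[ ... ]]` tests; a mismatch in any field fails the test case before the process is killed and the raid bdev is cleaned up, as seen in the `killprocess`/`raid_bdev_fini_start` lines above.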
16:22:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:39.148 16:22:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:39.148 16:22:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.148 16:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.148 ************************************ 00:09:39.148 START TEST raid_read_error_test 00:09:39.148 ************************************ 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7voS6ps57K 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81916 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81916 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 
-t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81916 ']' 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.148 16:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.148 [2024-11-28 16:22:30.808269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:39.148 [2024-11-28 16:22:30.808386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81916 ] 00:09:39.408 [2024-11-28 16:22:30.962572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.408 [2024-11-28 16:22:31.005764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.408 [2024-11-28 16:22:31.047077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.408 [2024-11-28 16:22:31.047123] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 BaseBdev1_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 true 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 [2024-11-28 16:22:31.656417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:39.979 [2024-11-28 16:22:31.656476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.979 [2024-11-28 16:22:31.656502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:39.979 [2024-11-28 16:22:31.656513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.979 [2024-11-28 16:22:31.658622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.979 [2024-11-28 16:22:31.658660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:39.979 BaseBdev1 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 BaseBdev2_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 true 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 [2024-11-28 16:22:31.706670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:39.979 [2024-11-28 16:22:31.706762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.979 [2024-11-28 16:22:31.706783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:39.979 [2024-11-28 16:22:31.706791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.979 [2024-11-28 16:22:31.708765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.979 [2024-11-28 16:22:31.708801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:39.979 BaseBdev2 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 BaseBdev3_malloc 00:09:39.979 16:22:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 true 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.979 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 [2024-11-28 16:22:31.747015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:39.979 [2024-11-28 16:22:31.747061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.979 [2024-11-28 16:22:31.747094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:39.979 [2024-11-28 16:22:31.747102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.239 [2024-11-28 16:22:31.749057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.239 [2024-11-28 16:22:31.749140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:40.239 BaseBdev3 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.240 BaseBdev4_malloc 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.240 true 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.240 [2024-11-28 16:22:31.787264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:40.240 [2024-11-28 16:22:31.787308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.240 [2024-11-28 16:22:31.787344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:40.240 [2024-11-28 16:22:31.787352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.240 [2024-11-28 16:22:31.789366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.240 [2024-11-28 16:22:31.789433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:40.240 BaseBdev4 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.240 [2024-11-28 16:22:31.799296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.240 [2024-11-28 16:22:31.801133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.240 [2024-11-28 16:22:31.801215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.240 [2024-11-28 16:22:31.801266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:40.240 [2024-11-28 16:22:31.801443] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:40.240 [2024-11-28 16:22:31.801454] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:40.240 [2024-11-28 16:22:31.801684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:40.240 [2024-11-28 16:22:31.801810] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:40.240 [2024-11-28 16:22:31.801822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:40.240 [2024-11-28 16:22:31.801958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:40.240 16:22:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.240 "name": "raid_bdev1", 00:09:40.240 "uuid": "f7bed5af-adf2-459a-a863-7822826e0940", 00:09:40.240 "strip_size_kb": 64, 00:09:40.240 "state": "online", 00:09:40.240 "raid_level": "raid0", 00:09:40.240 "superblock": true, 00:09:40.240 "num_base_bdevs": 4, 00:09:40.240 "num_base_bdevs_discovered": 4, 00:09:40.240 "num_base_bdevs_operational": 4, 00:09:40.240 "base_bdevs_list": [ 00:09:40.240 
{ 00:09:40.240 "name": "BaseBdev1", 00:09:40.240 "uuid": "085636e3-632f-556d-af01-32eafc597219", 00:09:40.240 "is_configured": true, 00:09:40.240 "data_offset": 2048, 00:09:40.240 "data_size": 63488 00:09:40.240 }, 00:09:40.240 { 00:09:40.240 "name": "BaseBdev2", 00:09:40.240 "uuid": "74896b7c-387a-54a8-8172-89aa019ffcd7", 00:09:40.240 "is_configured": true, 00:09:40.240 "data_offset": 2048, 00:09:40.240 "data_size": 63488 00:09:40.240 }, 00:09:40.240 { 00:09:40.240 "name": "BaseBdev3", 00:09:40.240 "uuid": "a628708b-d38d-50c7-a56d-7a5553544059", 00:09:40.240 "is_configured": true, 00:09:40.240 "data_offset": 2048, 00:09:40.240 "data_size": 63488 00:09:40.240 }, 00:09:40.240 { 00:09:40.240 "name": "BaseBdev4", 00:09:40.240 "uuid": "70e31a75-2ee6-5069-a2cc-fdc89d1f4bd1", 00:09:40.240 "is_configured": true, 00:09:40.240 "data_offset": 2048, 00:09:40.240 "data_size": 63488 00:09:40.240 } 00:09:40.240 ] 00:09:40.240 }' 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.240 16:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.500 16:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:40.500 16:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:40.760 [2024-11-28 16:22:32.342691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.699 16:22:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.699 16:22:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.699 "name": "raid_bdev1", 00:09:41.699 "uuid": "f7bed5af-adf2-459a-a863-7822826e0940", 00:09:41.699 "strip_size_kb": 64, 00:09:41.699 "state": "online", 00:09:41.699 "raid_level": "raid0", 00:09:41.699 "superblock": true, 00:09:41.699 "num_base_bdevs": 4, 00:09:41.699 "num_base_bdevs_discovered": 4, 00:09:41.699 "num_base_bdevs_operational": 4, 00:09:41.699 "base_bdevs_list": [ 00:09:41.699 { 00:09:41.699 "name": "BaseBdev1", 00:09:41.699 "uuid": "085636e3-632f-556d-af01-32eafc597219", 00:09:41.699 "is_configured": true, 00:09:41.699 "data_offset": 2048, 00:09:41.699 "data_size": 63488 00:09:41.699 }, 00:09:41.699 { 00:09:41.699 "name": "BaseBdev2", 00:09:41.699 "uuid": "74896b7c-387a-54a8-8172-89aa019ffcd7", 00:09:41.699 "is_configured": true, 00:09:41.699 "data_offset": 2048, 00:09:41.699 "data_size": 63488 00:09:41.699 }, 00:09:41.699 { 00:09:41.699 "name": "BaseBdev3", 00:09:41.699 "uuid": "a628708b-d38d-50c7-a56d-7a5553544059", 00:09:41.699 "is_configured": true, 00:09:41.699 "data_offset": 2048, 00:09:41.699 "data_size": 63488 00:09:41.699 }, 00:09:41.699 { 00:09:41.699 "name": "BaseBdev4", 00:09:41.699 "uuid": "70e31a75-2ee6-5069-a2cc-fdc89d1f4bd1", 00:09:41.699 "is_configured": true, 00:09:41.699 "data_offset": 2048, 00:09:41.699 "data_size": 63488 00:09:41.699 } 00:09:41.699 ] 00:09:41.699 }' 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.699 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.268 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.268 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.268 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.268 [2024-11-28 16:22:33.742339] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.268 [2024-11-28 16:22:33.742435] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.268 [2024-11-28 16:22:33.744846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.268 [2024-11-28 16:22:33.744945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.269 [2024-11-28 16:22:33.745025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.269 [2024-11-28 16:22:33.745077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:42.269 { 00:09:42.269 "results": [ 00:09:42.269 { 00:09:42.269 "job": "raid_bdev1", 00:09:42.269 "core_mask": "0x1", 00:09:42.269 "workload": "randrw", 00:09:42.269 "percentage": 50, 00:09:42.269 "status": "finished", 00:09:42.269 "queue_depth": 1, 00:09:42.269 "io_size": 131072, 00:09:42.269 "runtime": 1.400655, 00:09:42.269 "iops": 17189.814765234838, 00:09:42.269 "mibps": 2148.7268456543547, 00:09:42.269 "io_failed": 1, 00:09:42.269 "io_timeout": 0, 00:09:42.269 "avg_latency_us": 80.70329217524849, 00:09:42.269 "min_latency_us": 24.370305676855896, 00:09:42.269 "max_latency_us": 1380.8349344978167 00:09:42.269 } 00:09:42.269 ], 00:09:42.269 "core_count": 1 00:09:42.269 } 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81916 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81916 ']' 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81916 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81916 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81916' 00:09:42.269 killing process with pid 81916 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81916 00:09:42.269 [2024-11-28 16:22:33.788235] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.269 16:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81916 00:09:42.269 [2024-11-28 16:22:33.822988] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7voS6ps57K 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:42.529 ************************************ 00:09:42.529 END TEST raid_read_error_test 00:09:42.529 ************************************ 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:42.529 00:09:42.529 real 0m3.358s 
00:09:42.529 user 0m4.259s 00:09:42.529 sys 0m0.534s 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.529 16:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.529 16:22:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:42.529 16:22:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:42.529 16:22:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.529 16:22:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.529 ************************************ 00:09:42.529 START TEST raid_write_error_test 00:09:42.529 ************************************ 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fwMFBEHOj3 00:09:42.529 16:22:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82050 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82050 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82050 ']' 00:09:42.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.529 16:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.529 [2024-11-28 16:22:34.239162] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:42.529 [2024-11-28 16:22:34.239296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82050 ] 00:09:42.789 [2024-11-28 16:22:34.400712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.789 [2024-11-28 16:22:34.443696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.789 [2024-11-28 16:22:34.484507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.789 [2024-11-28 16:22:34.484620] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.360 BaseBdev1_malloc 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.360 true 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.360 [2024-11-28 16:22:35.089913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.360 [2024-11-28 16:22:35.089976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.360 [2024-11-28 16:22:35.089997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.360 [2024-11-28 16:22:35.090006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.360 [2024-11-28 16:22:35.092024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.360 [2024-11-28 16:22:35.092059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.360 BaseBdev1 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.360 BaseBdev2_malloc 00:09:43.360 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.361 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.361 16:22:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.361 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 true 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 [2024-11-28 16:22:35.140731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.622 [2024-11-28 16:22:35.140819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.622 [2024-11-28 16:22:35.140855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.622 [2024-11-28 16:22:35.140865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.622 [2024-11-28 16:22:35.142816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.622 [2024-11-28 16:22:35.142861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.622 BaseBdev2 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:43.622 BaseBdev3_malloc 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 true 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 [2024-11-28 16:22:35.181053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:43.622 [2024-11-28 16:22:35.181139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.622 [2024-11-28 16:22:35.181176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:43.622 [2024-11-28 16:22:35.181185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.622 [2024-11-28 16:22:35.183194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.622 [2024-11-28 16:22:35.183228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:43.622 BaseBdev3 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 BaseBdev4_malloc 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 true 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.622 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.622 [2024-11-28 16:22:35.221357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:43.622 [2024-11-28 16:22:35.221404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.622 [2024-11-28 16:22:35.221440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:43.622 [2024-11-28 16:22:35.221449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.622 [2024-11-28 16:22:35.223406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.622 [2024-11-28 16:22:35.223441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:43.622 BaseBdev4 
00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.623 [2024-11-28 16:22:35.233382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.623 [2024-11-28 16:22:35.235133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.623 [2024-11-28 16:22:35.235218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.623 [2024-11-28 16:22:35.235270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:43.623 [2024-11-28 16:22:35.235464] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:43.623 [2024-11-28 16:22:35.235475] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:43.623 [2024-11-28 16:22:35.235728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:43.623 [2024-11-28 16:22:35.235881] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:43.623 [2024-11-28 16:22:35.235895] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:43.623 [2024-11-28 16:22:35.236027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.623 "name": "raid_bdev1", 00:09:43.623 "uuid": "3fedb37e-3e2f-496f-9735-151fd67ad15a", 00:09:43.623 "strip_size_kb": 64, 00:09:43.623 "state": "online", 00:09:43.623 "raid_level": "raid0", 00:09:43.623 "superblock": true, 00:09:43.623 "num_base_bdevs": 4, 00:09:43.623 "num_base_bdevs_discovered": 4, 00:09:43.623 
"num_base_bdevs_operational": 4, 00:09:43.623 "base_bdevs_list": [ 00:09:43.623 { 00:09:43.623 "name": "BaseBdev1", 00:09:43.623 "uuid": "b320488d-43dd-5ba3-83e2-b985998e64fc", 00:09:43.623 "is_configured": true, 00:09:43.623 "data_offset": 2048, 00:09:43.623 "data_size": 63488 00:09:43.623 }, 00:09:43.623 { 00:09:43.623 "name": "BaseBdev2", 00:09:43.623 "uuid": "c4ae4fdc-334c-5c5d-b845-b22ff93c458f", 00:09:43.623 "is_configured": true, 00:09:43.623 "data_offset": 2048, 00:09:43.623 "data_size": 63488 00:09:43.623 }, 00:09:43.623 { 00:09:43.623 "name": "BaseBdev3", 00:09:43.623 "uuid": "b1beae35-6fa5-5d92-8909-bfe5d7ace3d7", 00:09:43.623 "is_configured": true, 00:09:43.623 "data_offset": 2048, 00:09:43.623 "data_size": 63488 00:09:43.623 }, 00:09:43.623 { 00:09:43.623 "name": "BaseBdev4", 00:09:43.623 "uuid": "d87391d0-3346-50b8-a842-8488c6a8eeee", 00:09:43.623 "is_configured": true, 00:09:43.623 "data_offset": 2048, 00:09:43.623 "data_size": 63488 00:09:43.623 } 00:09:43.623 ] 00:09:43.623 }' 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.623 16:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.192 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.192 16:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.192 [2024-11-28 16:22:35.796746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.132 "name": "raid_bdev1", 00:09:45.132 "uuid": "3fedb37e-3e2f-496f-9735-151fd67ad15a", 00:09:45.132 "strip_size_kb": 64, 00:09:45.132 "state": "online", 00:09:45.132 "raid_level": "raid0", 00:09:45.132 "superblock": true, 00:09:45.132 "num_base_bdevs": 4, 00:09:45.132 "num_base_bdevs_discovered": 4, 00:09:45.132 "num_base_bdevs_operational": 4, 00:09:45.132 "base_bdevs_list": [ 00:09:45.132 { 00:09:45.132 "name": "BaseBdev1", 00:09:45.132 "uuid": "b320488d-43dd-5ba3-83e2-b985998e64fc", 00:09:45.132 "is_configured": true, 00:09:45.132 "data_offset": 2048, 00:09:45.132 "data_size": 63488 00:09:45.132 }, 00:09:45.132 { 00:09:45.132 "name": "BaseBdev2", 00:09:45.132 "uuid": "c4ae4fdc-334c-5c5d-b845-b22ff93c458f", 00:09:45.132 "is_configured": true, 00:09:45.132 "data_offset": 2048, 00:09:45.132 "data_size": 63488 00:09:45.132 }, 00:09:45.132 { 00:09:45.132 "name": "BaseBdev3", 00:09:45.132 "uuid": "b1beae35-6fa5-5d92-8909-bfe5d7ace3d7", 00:09:45.132 "is_configured": true, 00:09:45.132 "data_offset": 2048, 00:09:45.132 "data_size": 63488 00:09:45.132 }, 00:09:45.132 { 00:09:45.132 "name": "BaseBdev4", 00:09:45.132 "uuid": "d87391d0-3346-50b8-a842-8488c6a8eeee", 00:09:45.132 "is_configured": true, 00:09:45.132 "data_offset": 2048, 00:09:45.132 "data_size": 63488 00:09:45.132 } 00:09:45.132 ] 00:09:45.132 }' 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.132 16:22:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.701 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.701 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.701 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:45.701 [2024-11-28 16:22:37.168647] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.701 [2024-11-28 16:22:37.168680] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.701 [2024-11-28 16:22:37.171238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.702 [2024-11-28 16:22:37.171337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.702 [2024-11-28 16:22:37.171392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.702 [2024-11-28 16:22:37.171401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:45.702 { 00:09:45.702 "results": [ 00:09:45.702 { 00:09:45.702 "job": "raid_bdev1", 00:09:45.702 "core_mask": "0x1", 00:09:45.702 "workload": "randrw", 00:09:45.702 "percentage": 50, 00:09:45.702 "status": "finished", 00:09:45.702 "queue_depth": 1, 00:09:45.702 "io_size": 131072, 00:09:45.702 "runtime": 1.37274, 00:09:45.702 "iops": 17164.21172254032, 00:09:45.702 "mibps": 2145.52646531754, 00:09:45.702 "io_failed": 1, 00:09:45.702 "io_timeout": 0, 00:09:45.702 "avg_latency_us": 80.93015283564806, 00:09:45.702 "min_latency_us": 25.4882096069869, 00:09:45.702 "max_latency_us": 1409.4532751091704 00:09:45.702 } 00:09:45.702 ], 00:09:45.702 "core_count": 1 00:09:45.702 } 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82050 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82050 ']' 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82050 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82050 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.702 killing process with pid 82050 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82050' 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82050 00:09:45.702 [2024-11-28 16:22:37.206338] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.702 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82050 00:09:45.702 [2024-11-28 16:22:37.241762] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fwMFBEHOj3 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:45.985 ************************************ 00:09:45.985 END TEST raid_write_error_test 00:09:45.985 
************************************ 00:09:45.985 00:09:45.985 real 0m3.349s 00:09:45.985 user 0m4.232s 00:09:45.985 sys 0m0.537s 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.985 16:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.985 16:22:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:45.985 16:22:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:45.985 16:22:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.985 16:22:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.985 16:22:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.985 ************************************ 00:09:45.985 START TEST raid_state_function_test 00:09:45.985 ************************************ 00:09:45.985 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:45.985 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:45.985 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:45.985 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.986 16:22:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:45.986 16:22:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82183 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.986 Process raid pid: 82183 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82183' 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82183 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82183 ']' 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.986 16:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 [2024-11-28 16:22:37.644652] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:45.986 [2024-11-28 16:22:37.644777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.270 [2024-11-28 16:22:37.799740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.270 [2024-11-28 16:22:37.842970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.270 [2024-11-28 16:22:37.884546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.270 [2024-11-28 16:22:37.884582] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.838 [2024-11-28 16:22:38.469489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.838 [2024-11-28 16:22:38.469589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.838 [2024-11-28 16:22:38.469649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.838 [2024-11-28 16:22:38.469682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.838 [2024-11-28 16:22:38.469711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:46.838 [2024-11-28 16:22:38.469759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.838 [2024-11-28 16:22:38.469784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.838 [2024-11-28 16:22:38.469820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.838 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.838 "name": "Existed_Raid", 00:09:46.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.838 "strip_size_kb": 64, 00:09:46.838 "state": "configuring", 00:09:46.838 "raid_level": "concat", 00:09:46.838 "superblock": false, 00:09:46.838 "num_base_bdevs": 4, 00:09:46.838 "num_base_bdevs_discovered": 0, 00:09:46.838 "num_base_bdevs_operational": 4, 00:09:46.838 "base_bdevs_list": [ 00:09:46.838 { 00:09:46.838 "name": "BaseBdev1", 00:09:46.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.838 "is_configured": false, 00:09:46.838 "data_offset": 0, 00:09:46.838 "data_size": 0 00:09:46.838 }, 00:09:46.838 { 00:09:46.838 "name": "BaseBdev2", 00:09:46.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.838 "is_configured": false, 00:09:46.838 "data_offset": 0, 00:09:46.839 "data_size": 0 00:09:46.839 }, 00:09:46.839 { 00:09:46.839 "name": "BaseBdev3", 00:09:46.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.839 "is_configured": false, 00:09:46.839 "data_offset": 0, 00:09:46.839 "data_size": 0 00:09:46.839 }, 00:09:46.839 { 00:09:46.839 "name": "BaseBdev4", 00:09:46.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.839 "is_configured": false, 00:09:46.839 "data_offset": 0, 00:09:46.839 "data_size": 0 00:09:46.839 } 00:09:46.839 ] 00:09:46.839 }' 00:09:46.839 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.839 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.099 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:47.099 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.099 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.099 [2024-11-28 16:22:38.864721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.099 [2024-11-28 16:22:38.864801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.358 [2024-11-28 16:22:38.876731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.358 [2024-11-28 16:22:38.876815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.358 [2024-11-28 16:22:38.876857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.358 [2024-11-28 16:22:38.876881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.358 [2024-11-28 16:22:38.876899] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.358 [2024-11-28 16:22:38.876919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.358 [2024-11-28 16:22:38.876936] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.358 [2024-11-28 16:22:38.876957] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.358 [2024-11-28 16:22:38.897369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.358 BaseBdev1 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.358 [ 00:09:47.358 { 00:09:47.358 "name": "BaseBdev1", 00:09:47.358 "aliases": [ 00:09:47.358 "8ae0db77-91ff-4473-932a-78e4ce9ff69c" 00:09:47.358 ], 00:09:47.358 "product_name": "Malloc disk", 00:09:47.358 "block_size": 512, 00:09:47.358 "num_blocks": 65536, 00:09:47.358 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:47.358 "assigned_rate_limits": { 00:09:47.358 "rw_ios_per_sec": 0, 00:09:47.358 "rw_mbytes_per_sec": 0, 00:09:47.358 "r_mbytes_per_sec": 0, 00:09:47.358 "w_mbytes_per_sec": 0 00:09:47.358 }, 00:09:47.358 "claimed": true, 00:09:47.358 "claim_type": "exclusive_write", 00:09:47.358 "zoned": false, 00:09:47.358 "supported_io_types": { 00:09:47.358 "read": true, 00:09:47.358 "write": true, 00:09:47.358 "unmap": true, 00:09:47.358 "flush": true, 00:09:47.358 "reset": true, 00:09:47.358 "nvme_admin": false, 00:09:47.358 "nvme_io": false, 00:09:47.358 "nvme_io_md": false, 00:09:47.358 "write_zeroes": true, 00:09:47.358 "zcopy": true, 00:09:47.358 "get_zone_info": false, 00:09:47.358 "zone_management": false, 00:09:47.358 "zone_append": false, 00:09:47.358 "compare": false, 00:09:47.358 "compare_and_write": false, 00:09:47.358 "abort": true, 00:09:47.358 "seek_hole": false, 00:09:47.358 "seek_data": false, 00:09:47.358 "copy": true, 00:09:47.358 "nvme_iov_md": false 00:09:47.358 }, 00:09:47.358 "memory_domains": [ 00:09:47.358 { 00:09:47.358 "dma_device_id": "system", 00:09:47.358 "dma_device_type": 1 00:09:47.358 }, 00:09:47.358 { 00:09:47.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.358 "dma_device_type": 2 00:09:47.358 } 00:09:47.358 ], 00:09:47.358 "driver_specific": {} 00:09:47.358 } 00:09:47.358 ] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.358 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.359 "name": "Existed_Raid", 
00:09:47.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.359 "strip_size_kb": 64, 00:09:47.359 "state": "configuring", 00:09:47.359 "raid_level": "concat", 00:09:47.359 "superblock": false, 00:09:47.359 "num_base_bdevs": 4, 00:09:47.359 "num_base_bdevs_discovered": 1, 00:09:47.359 "num_base_bdevs_operational": 4, 00:09:47.359 "base_bdevs_list": [ 00:09:47.359 { 00:09:47.359 "name": "BaseBdev1", 00:09:47.359 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:47.359 "is_configured": true, 00:09:47.359 "data_offset": 0, 00:09:47.359 "data_size": 65536 00:09:47.359 }, 00:09:47.359 { 00:09:47.359 "name": "BaseBdev2", 00:09:47.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.359 "is_configured": false, 00:09:47.359 "data_offset": 0, 00:09:47.359 "data_size": 0 00:09:47.359 }, 00:09:47.359 { 00:09:47.359 "name": "BaseBdev3", 00:09:47.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.359 "is_configured": false, 00:09:47.359 "data_offset": 0, 00:09:47.359 "data_size": 0 00:09:47.359 }, 00:09:47.359 { 00:09:47.359 "name": "BaseBdev4", 00:09:47.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.359 "is_configured": false, 00:09:47.359 "data_offset": 0, 00:09:47.359 "data_size": 0 00:09:47.359 } 00:09:47.359 ] 00:09:47.359 }' 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.359 16:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.617 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.618 [2024-11-28 16:22:39.324650] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.618 [2024-11-28 16:22:39.324741] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.618 [2024-11-28 16:22:39.336678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.618 [2024-11-28 16:22:39.338494] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.618 [2024-11-28 16:22:39.338564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.618 [2024-11-28 16:22:39.338590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.618 [2024-11-28 16:22:39.338627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.618 [2024-11-28 16:22:39.338644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.618 [2024-11-28 16:22:39.338663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.618 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.878 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.878 "name": "Existed_Raid", 00:09:47.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.878 "strip_size_kb": 64, 00:09:47.878 "state": "configuring", 00:09:47.878 "raid_level": "concat", 00:09:47.878 "superblock": false, 00:09:47.878 "num_base_bdevs": 4, 00:09:47.878 
"num_base_bdevs_discovered": 1, 00:09:47.878 "num_base_bdevs_operational": 4, 00:09:47.878 "base_bdevs_list": [ 00:09:47.878 { 00:09:47.878 "name": "BaseBdev1", 00:09:47.878 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:47.878 "is_configured": true, 00:09:47.878 "data_offset": 0, 00:09:47.878 "data_size": 65536 00:09:47.878 }, 00:09:47.878 { 00:09:47.878 "name": "BaseBdev2", 00:09:47.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.878 "is_configured": false, 00:09:47.878 "data_offset": 0, 00:09:47.878 "data_size": 0 00:09:47.878 }, 00:09:47.878 { 00:09:47.878 "name": "BaseBdev3", 00:09:47.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.878 "is_configured": false, 00:09:47.878 "data_offset": 0, 00:09:47.878 "data_size": 0 00:09:47.878 }, 00:09:47.878 { 00:09:47.878 "name": "BaseBdev4", 00:09:47.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.878 "is_configured": false, 00:09:47.878 "data_offset": 0, 00:09:47.878 "data_size": 0 00:09:47.878 } 00:09:47.878 ] 00:09:47.878 }' 00:09:47.878 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.878 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.138 [2024-11-28 16:22:39.785117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.138 BaseBdev2 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:48.138 16:22:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.138 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.139 [ 00:09:48.139 { 00:09:48.139 "name": "BaseBdev2", 00:09:48.139 "aliases": [ 00:09:48.139 "ba68812a-559c-4424-9435-40dc2121f1f6" 00:09:48.139 ], 00:09:48.139 "product_name": "Malloc disk", 00:09:48.139 "block_size": 512, 00:09:48.139 "num_blocks": 65536, 00:09:48.139 "uuid": "ba68812a-559c-4424-9435-40dc2121f1f6", 00:09:48.139 "assigned_rate_limits": { 00:09:48.139 "rw_ios_per_sec": 0, 00:09:48.139 "rw_mbytes_per_sec": 0, 00:09:48.139 "r_mbytes_per_sec": 0, 00:09:48.139 "w_mbytes_per_sec": 0 00:09:48.139 }, 00:09:48.139 "claimed": true, 00:09:48.139 "claim_type": "exclusive_write", 00:09:48.139 "zoned": false, 00:09:48.139 "supported_io_types": { 
00:09:48.139 "read": true, 00:09:48.139 "write": true, 00:09:48.139 "unmap": true, 00:09:48.139 "flush": true, 00:09:48.139 "reset": true, 00:09:48.139 "nvme_admin": false, 00:09:48.139 "nvme_io": false, 00:09:48.139 "nvme_io_md": false, 00:09:48.139 "write_zeroes": true, 00:09:48.139 "zcopy": true, 00:09:48.139 "get_zone_info": false, 00:09:48.139 "zone_management": false, 00:09:48.139 "zone_append": false, 00:09:48.139 "compare": false, 00:09:48.139 "compare_and_write": false, 00:09:48.139 "abort": true, 00:09:48.139 "seek_hole": false, 00:09:48.139 "seek_data": false, 00:09:48.139 "copy": true, 00:09:48.139 "nvme_iov_md": false 00:09:48.139 }, 00:09:48.139 "memory_domains": [ 00:09:48.139 { 00:09:48.139 "dma_device_id": "system", 00:09:48.139 "dma_device_type": 1 00:09:48.139 }, 00:09:48.139 { 00:09:48.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.139 "dma_device_type": 2 00:09:48.139 } 00:09:48.139 ], 00:09:48.139 "driver_specific": {} 00:09:48.139 } 00:09:48.139 ] 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.139 "name": "Existed_Raid", 00:09:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.139 "strip_size_kb": 64, 00:09:48.139 "state": "configuring", 00:09:48.139 "raid_level": "concat", 00:09:48.139 "superblock": false, 00:09:48.139 "num_base_bdevs": 4, 00:09:48.139 "num_base_bdevs_discovered": 2, 00:09:48.139 "num_base_bdevs_operational": 4, 00:09:48.139 "base_bdevs_list": [ 00:09:48.139 { 00:09:48.139 "name": "BaseBdev1", 00:09:48.139 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:48.139 "is_configured": true, 00:09:48.139 "data_offset": 0, 00:09:48.139 "data_size": 65536 00:09:48.139 }, 00:09:48.139 { 00:09:48.139 "name": "BaseBdev2", 00:09:48.139 "uuid": "ba68812a-559c-4424-9435-40dc2121f1f6", 00:09:48.139 
"is_configured": true, 00:09:48.139 "data_offset": 0, 00:09:48.139 "data_size": 65536 00:09:48.139 }, 00:09:48.139 { 00:09:48.139 "name": "BaseBdev3", 00:09:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.139 "is_configured": false, 00:09:48.139 "data_offset": 0, 00:09:48.139 "data_size": 0 00:09:48.139 }, 00:09:48.139 { 00:09:48.139 "name": "BaseBdev4", 00:09:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.139 "is_configured": false, 00:09:48.139 "data_offset": 0, 00:09:48.139 "data_size": 0 00:09:48.139 } 00:09:48.139 ] 00:09:48.139 }' 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.139 16:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.707 [2024-11-28 16:22:40.283111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.707 BaseBdev3 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.707 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.708 [ 00:09:48.708 { 00:09:48.708 "name": "BaseBdev3", 00:09:48.708 "aliases": [ 00:09:48.708 "952e95aa-f742-4527-a739-03c59c1dc9a7" 00:09:48.708 ], 00:09:48.708 "product_name": "Malloc disk", 00:09:48.708 "block_size": 512, 00:09:48.708 "num_blocks": 65536, 00:09:48.708 "uuid": "952e95aa-f742-4527-a739-03c59c1dc9a7", 00:09:48.708 "assigned_rate_limits": { 00:09:48.708 "rw_ios_per_sec": 0, 00:09:48.708 "rw_mbytes_per_sec": 0, 00:09:48.708 "r_mbytes_per_sec": 0, 00:09:48.708 "w_mbytes_per_sec": 0 00:09:48.708 }, 00:09:48.708 "claimed": true, 00:09:48.708 "claim_type": "exclusive_write", 00:09:48.708 "zoned": false, 00:09:48.708 "supported_io_types": { 00:09:48.708 "read": true, 00:09:48.708 "write": true, 00:09:48.708 "unmap": true, 00:09:48.708 "flush": true, 00:09:48.708 "reset": true, 00:09:48.708 "nvme_admin": false, 00:09:48.708 "nvme_io": false, 00:09:48.708 "nvme_io_md": false, 00:09:48.708 "write_zeroes": true, 00:09:48.708 "zcopy": true, 00:09:48.708 "get_zone_info": false, 00:09:48.708 "zone_management": false, 00:09:48.708 "zone_append": false, 00:09:48.708 "compare": false, 00:09:48.708 "compare_and_write": false, 
00:09:48.708 "abort": true, 00:09:48.708 "seek_hole": false, 00:09:48.708 "seek_data": false, 00:09:48.708 "copy": true, 00:09:48.708 "nvme_iov_md": false 00:09:48.708 }, 00:09:48.708 "memory_domains": [ 00:09:48.708 { 00:09:48.708 "dma_device_id": "system", 00:09:48.708 "dma_device_type": 1 00:09:48.708 }, 00:09:48.708 { 00:09:48.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.708 "dma_device_type": 2 00:09:48.708 } 00:09:48.708 ], 00:09:48.708 "driver_specific": {} 00:09:48.708 } 00:09:48.708 ] 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.708 "name": "Existed_Raid", 00:09:48.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.708 "strip_size_kb": 64, 00:09:48.708 "state": "configuring", 00:09:48.708 "raid_level": "concat", 00:09:48.708 "superblock": false, 00:09:48.708 "num_base_bdevs": 4, 00:09:48.708 "num_base_bdevs_discovered": 3, 00:09:48.708 "num_base_bdevs_operational": 4, 00:09:48.708 "base_bdevs_list": [ 00:09:48.708 { 00:09:48.708 "name": "BaseBdev1", 00:09:48.708 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:48.708 "is_configured": true, 00:09:48.708 "data_offset": 0, 00:09:48.708 "data_size": 65536 00:09:48.708 }, 00:09:48.708 { 00:09:48.708 "name": "BaseBdev2", 00:09:48.708 "uuid": "ba68812a-559c-4424-9435-40dc2121f1f6", 00:09:48.708 "is_configured": true, 00:09:48.708 "data_offset": 0, 00:09:48.708 "data_size": 65536 00:09:48.708 }, 00:09:48.708 { 00:09:48.708 "name": "BaseBdev3", 00:09:48.708 "uuid": "952e95aa-f742-4527-a739-03c59c1dc9a7", 00:09:48.708 "is_configured": true, 00:09:48.708 "data_offset": 0, 00:09:48.708 "data_size": 65536 00:09:48.708 }, 00:09:48.708 { 00:09:48.708 "name": "BaseBdev4", 00:09:48.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.708 "is_configured": false, 
00:09:48.708 "data_offset": 0, 00:09:48.708 "data_size": 0 00:09:48.708 } 00:09:48.708 ] 00:09:48.708 }' 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.708 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.278 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:49.278 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.278 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.278 [2024-11-28 16:22:40.757214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.278 [2024-11-28 16:22:40.757338] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:49.278 [2024-11-28 16:22:40.757363] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:49.278 [2024-11-28 16:22:40.757670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:49.278 [2024-11-28 16:22:40.757857] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:49.278 [2024-11-28 16:22:40.757917] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:49.279 [2024-11-28 16:22:40.758185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.279 BaseBdev4 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.279 [ 00:09:49.279 { 00:09:49.279 "name": "BaseBdev4", 00:09:49.279 "aliases": [ 00:09:49.279 "b4f2ec71-1591-4ceb-9f2e-e61200df2c39" 00:09:49.279 ], 00:09:49.279 "product_name": "Malloc disk", 00:09:49.279 "block_size": 512, 00:09:49.279 "num_blocks": 65536, 00:09:49.279 "uuid": "b4f2ec71-1591-4ceb-9f2e-e61200df2c39", 00:09:49.279 "assigned_rate_limits": { 00:09:49.279 "rw_ios_per_sec": 0, 00:09:49.279 "rw_mbytes_per_sec": 0, 00:09:49.279 "r_mbytes_per_sec": 0, 00:09:49.279 "w_mbytes_per_sec": 0 00:09:49.279 }, 00:09:49.279 "claimed": true, 00:09:49.279 "claim_type": "exclusive_write", 00:09:49.279 "zoned": false, 00:09:49.279 "supported_io_types": { 00:09:49.279 "read": true, 00:09:49.279 "write": true, 00:09:49.279 "unmap": true, 00:09:49.279 "flush": true, 00:09:49.279 "reset": true, 00:09:49.279 
"nvme_admin": false, 00:09:49.279 "nvme_io": false, 00:09:49.279 "nvme_io_md": false, 00:09:49.279 "write_zeroes": true, 00:09:49.279 "zcopy": true, 00:09:49.279 "get_zone_info": false, 00:09:49.279 "zone_management": false, 00:09:49.279 "zone_append": false, 00:09:49.279 "compare": false, 00:09:49.279 "compare_and_write": false, 00:09:49.279 "abort": true, 00:09:49.279 "seek_hole": false, 00:09:49.279 "seek_data": false, 00:09:49.279 "copy": true, 00:09:49.279 "nvme_iov_md": false 00:09:49.279 }, 00:09:49.279 "memory_domains": [ 00:09:49.279 { 00:09:49.279 "dma_device_id": "system", 00:09:49.279 "dma_device_type": 1 00:09:49.279 }, 00:09:49.279 { 00:09:49.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.279 "dma_device_type": 2 00:09:49.279 } 00:09:49.279 ], 00:09:49.279 "driver_specific": {} 00:09:49.279 } 00:09:49.279 ] 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.279 
16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.279 "name": "Existed_Raid", 00:09:49.279 "uuid": "0510a93c-e574-4bb7-9098-ac2e21a87f66", 00:09:49.279 "strip_size_kb": 64, 00:09:49.279 "state": "online", 00:09:49.279 "raid_level": "concat", 00:09:49.279 "superblock": false, 00:09:49.279 "num_base_bdevs": 4, 00:09:49.279 "num_base_bdevs_discovered": 4, 00:09:49.279 "num_base_bdevs_operational": 4, 00:09:49.279 "base_bdevs_list": [ 00:09:49.279 { 00:09:49.279 "name": "BaseBdev1", 00:09:49.279 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:49.279 "is_configured": true, 00:09:49.279 "data_offset": 0, 00:09:49.279 "data_size": 65536 00:09:49.279 }, 00:09:49.279 { 00:09:49.279 "name": "BaseBdev2", 00:09:49.279 "uuid": "ba68812a-559c-4424-9435-40dc2121f1f6", 00:09:49.279 "is_configured": true, 00:09:49.279 "data_offset": 0, 00:09:49.279 "data_size": 65536 00:09:49.279 }, 00:09:49.279 { 00:09:49.279 "name": "BaseBdev3", 
00:09:49.279 "uuid": "952e95aa-f742-4527-a739-03c59c1dc9a7", 00:09:49.279 "is_configured": true, 00:09:49.279 "data_offset": 0, 00:09:49.279 "data_size": 65536 00:09:49.279 }, 00:09:49.279 { 00:09:49.279 "name": "BaseBdev4", 00:09:49.279 "uuid": "b4f2ec71-1591-4ceb-9f2e-e61200df2c39", 00:09:49.279 "is_configured": true, 00:09:49.279 "data_offset": 0, 00:09:49.279 "data_size": 65536 00:09:49.279 } 00:09:49.279 ] 00:09:49.279 }' 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.279 16:22:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.539 [2024-11-28 16:22:41.248754] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.539 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.539 
16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.539 "name": "Existed_Raid", 00:09:49.539 "aliases": [ 00:09:49.539 "0510a93c-e574-4bb7-9098-ac2e21a87f66" 00:09:49.539 ], 00:09:49.539 "product_name": "Raid Volume", 00:09:49.539 "block_size": 512, 00:09:49.539 "num_blocks": 262144, 00:09:49.539 "uuid": "0510a93c-e574-4bb7-9098-ac2e21a87f66", 00:09:49.539 "assigned_rate_limits": { 00:09:49.539 "rw_ios_per_sec": 0, 00:09:49.539 "rw_mbytes_per_sec": 0, 00:09:49.539 "r_mbytes_per_sec": 0, 00:09:49.539 "w_mbytes_per_sec": 0 00:09:49.539 }, 00:09:49.539 "claimed": false, 00:09:49.539 "zoned": false, 00:09:49.539 "supported_io_types": { 00:09:49.539 "read": true, 00:09:49.539 "write": true, 00:09:49.539 "unmap": true, 00:09:49.539 "flush": true, 00:09:49.539 "reset": true, 00:09:49.539 "nvme_admin": false, 00:09:49.539 "nvme_io": false, 00:09:49.539 "nvme_io_md": false, 00:09:49.539 "write_zeroes": true, 00:09:49.539 "zcopy": false, 00:09:49.539 "get_zone_info": false, 00:09:49.539 "zone_management": false, 00:09:49.539 "zone_append": false, 00:09:49.539 "compare": false, 00:09:49.539 "compare_and_write": false, 00:09:49.539 "abort": false, 00:09:49.539 "seek_hole": false, 00:09:49.539 "seek_data": false, 00:09:49.539 "copy": false, 00:09:49.539 "nvme_iov_md": false 00:09:49.539 }, 00:09:49.539 "memory_domains": [ 00:09:49.539 { 00:09:49.539 "dma_device_id": "system", 00:09:49.539 "dma_device_type": 1 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.539 "dma_device_type": 2 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": "system", 00:09:49.539 "dma_device_type": 1 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.539 "dma_device_type": 2 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": "system", 00:09:49.539 "dma_device_type": 1 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:49.539 "dma_device_type": 2 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": "system", 00:09:49.539 "dma_device_type": 1 00:09:49.539 }, 00:09:49.539 { 00:09:49.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.539 "dma_device_type": 2 00:09:49.539 } 00:09:49.539 ], 00:09:49.539 "driver_specific": { 00:09:49.539 "raid": { 00:09:49.539 "uuid": "0510a93c-e574-4bb7-9098-ac2e21a87f66", 00:09:49.539 "strip_size_kb": 64, 00:09:49.539 "state": "online", 00:09:49.539 "raid_level": "concat", 00:09:49.539 "superblock": false, 00:09:49.540 "num_base_bdevs": 4, 00:09:49.540 "num_base_bdevs_discovered": 4, 00:09:49.540 "num_base_bdevs_operational": 4, 00:09:49.540 "base_bdevs_list": [ 00:09:49.540 { 00:09:49.540 "name": "BaseBdev1", 00:09:49.540 "uuid": "8ae0db77-91ff-4473-932a-78e4ce9ff69c", 00:09:49.540 "is_configured": true, 00:09:49.540 "data_offset": 0, 00:09:49.540 "data_size": 65536 00:09:49.540 }, 00:09:49.540 { 00:09:49.540 "name": "BaseBdev2", 00:09:49.540 "uuid": "ba68812a-559c-4424-9435-40dc2121f1f6", 00:09:49.540 "is_configured": true, 00:09:49.540 "data_offset": 0, 00:09:49.540 "data_size": 65536 00:09:49.540 }, 00:09:49.540 { 00:09:49.540 "name": "BaseBdev3", 00:09:49.540 "uuid": "952e95aa-f742-4527-a739-03c59c1dc9a7", 00:09:49.540 "is_configured": true, 00:09:49.540 "data_offset": 0, 00:09:49.540 "data_size": 65536 00:09:49.540 }, 00:09:49.540 { 00:09:49.540 "name": "BaseBdev4", 00:09:49.540 "uuid": "b4f2ec71-1591-4ceb-9f2e-e61200df2c39", 00:09:49.540 "is_configured": true, 00:09:49.540 "data_offset": 0, 00:09:49.540 "data_size": 65536 00:09:49.540 } 00:09:49.540 ] 00:09:49.540 } 00:09:49.540 } 00:09:49.540 }' 00:09:49.540 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:49.799 BaseBdev2 
00:09:49.799 BaseBdev3 00:09:49.799 BaseBdev4' 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.799 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.800 16:22:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.800 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.059 16:22:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.059 [2024-11-28 16:22:41.583908] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.059 [2024-11-28 16:22:41.583944] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.059 [2024-11-28 16:22:41.584004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.059 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.060 "name": "Existed_Raid", 00:09:50.060 "uuid": "0510a93c-e574-4bb7-9098-ac2e21a87f66", 00:09:50.060 "strip_size_kb": 64, 00:09:50.060 "state": "offline", 00:09:50.060 "raid_level": "concat", 00:09:50.060 "superblock": false, 00:09:50.060 "num_base_bdevs": 4, 00:09:50.060 "num_base_bdevs_discovered": 3, 00:09:50.060 "num_base_bdevs_operational": 3, 00:09:50.060 "base_bdevs_list": [ 00:09:50.060 { 00:09:50.060 "name": null, 00:09:50.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.060 "is_configured": false, 00:09:50.060 "data_offset": 0, 00:09:50.060 "data_size": 65536 00:09:50.060 }, 00:09:50.060 { 00:09:50.060 "name": "BaseBdev2", 00:09:50.060 "uuid": "ba68812a-559c-4424-9435-40dc2121f1f6", 00:09:50.060 "is_configured": 
true, 00:09:50.060 "data_offset": 0, 00:09:50.060 "data_size": 65536 00:09:50.060 }, 00:09:50.060 { 00:09:50.060 "name": "BaseBdev3", 00:09:50.060 "uuid": "952e95aa-f742-4527-a739-03c59c1dc9a7", 00:09:50.060 "is_configured": true, 00:09:50.060 "data_offset": 0, 00:09:50.060 "data_size": 65536 00:09:50.060 }, 00:09:50.060 { 00:09:50.060 "name": "BaseBdev4", 00:09:50.060 "uuid": "b4f2ec71-1591-4ceb-9f2e-e61200df2c39", 00:09:50.060 "is_configured": true, 00:09:50.060 "data_offset": 0, 00:09:50.060 "data_size": 65536 00:09:50.060 } 00:09:50.060 ] 00:09:50.060 }' 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.060 16:22:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.319 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 [2024-11-28 16:22:42.110180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 [2024-11-28 16:22:42.161360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.580 16:22:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 [2024-11-28 16:22:42.227996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:50.580 [2024-11-28 16:22:42.228039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 BaseBdev2 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.580 [ 00:09:50.580 { 00:09:50.580 "name": "BaseBdev2", 00:09:50.580 "aliases": [ 00:09:50.580 "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95" 00:09:50.580 ], 00:09:50.580 "product_name": "Malloc disk", 00:09:50.580 "block_size": 512, 00:09:50.580 "num_blocks": 65536, 00:09:50.580 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:50.580 "assigned_rate_limits": { 00:09:50.580 "rw_ios_per_sec": 0, 00:09:50.580 "rw_mbytes_per_sec": 0, 00:09:50.580 "r_mbytes_per_sec": 0, 00:09:50.580 "w_mbytes_per_sec": 0 00:09:50.580 }, 00:09:50.580 "claimed": false, 00:09:50.580 "zoned": false, 00:09:50.580 "supported_io_types": { 00:09:50.580 "read": true, 00:09:50.580 "write": true, 00:09:50.580 "unmap": true, 00:09:50.580 "flush": true, 00:09:50.580 "reset": true, 00:09:50.580 "nvme_admin": false, 00:09:50.580 "nvme_io": false, 00:09:50.580 "nvme_io_md": false, 00:09:50.580 "write_zeroes": true, 00:09:50.580 "zcopy": true, 00:09:50.580 "get_zone_info": false, 00:09:50.580 "zone_management": false, 00:09:50.580 "zone_append": false, 00:09:50.580 "compare": false, 00:09:50.580 "compare_and_write": false, 00:09:50.580 "abort": true, 00:09:50.580 "seek_hole": false, 00:09:50.580 
"seek_data": false, 00:09:50.580 "copy": true, 00:09:50.580 "nvme_iov_md": false 00:09:50.580 }, 00:09:50.580 "memory_domains": [ 00:09:50.580 { 00:09:50.580 "dma_device_id": "system", 00:09:50.580 "dma_device_type": 1 00:09:50.580 }, 00:09:50.580 { 00:09:50.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.580 "dma_device_type": 2 00:09:50.580 } 00:09:50.580 ], 00:09:50.580 "driver_specific": {} 00:09:50.580 } 00:09:50.580 ] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.580 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.840 BaseBdev3 00:09:50.840 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.840 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 [ 00:09:50.841 { 00:09:50.841 "name": "BaseBdev3", 00:09:50.841 "aliases": [ 00:09:50.841 "a256276f-eb5b-45c6-8c66-8dd86204ec38" 00:09:50.841 ], 00:09:50.841 "product_name": "Malloc disk", 00:09:50.841 "block_size": 512, 00:09:50.841 "num_blocks": 65536, 00:09:50.841 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:50.841 "assigned_rate_limits": { 00:09:50.841 "rw_ios_per_sec": 0, 00:09:50.841 "rw_mbytes_per_sec": 0, 00:09:50.841 "r_mbytes_per_sec": 0, 00:09:50.841 "w_mbytes_per_sec": 0 00:09:50.841 }, 00:09:50.841 "claimed": false, 00:09:50.841 "zoned": false, 00:09:50.841 "supported_io_types": { 00:09:50.841 "read": true, 00:09:50.841 "write": true, 00:09:50.841 "unmap": true, 00:09:50.841 "flush": true, 00:09:50.841 "reset": true, 00:09:50.841 "nvme_admin": false, 00:09:50.841 "nvme_io": false, 00:09:50.841 "nvme_io_md": false, 00:09:50.841 "write_zeroes": true, 00:09:50.841 "zcopy": true, 00:09:50.841 "get_zone_info": false, 00:09:50.841 "zone_management": false, 00:09:50.841 "zone_append": false, 00:09:50.841 "compare": false, 00:09:50.841 "compare_and_write": false, 00:09:50.841 "abort": true, 00:09:50.841 "seek_hole": false, 00:09:50.841 "seek_data": false, 
00:09:50.841 "copy": true, 00:09:50.841 "nvme_iov_md": false 00:09:50.841 }, 00:09:50.841 "memory_domains": [ 00:09:50.841 { 00:09:50.841 "dma_device_id": "system", 00:09:50.841 "dma_device_type": 1 00:09:50.841 }, 00:09:50.841 { 00:09:50.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.841 "dma_device_type": 2 00:09:50.841 } 00:09:50.841 ], 00:09:50.841 "driver_specific": {} 00:09:50.841 } 00:09:50.841 ] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 BaseBdev4 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.841 
16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 [ 00:09:50.841 { 00:09:50.841 "name": "BaseBdev4", 00:09:50.841 "aliases": [ 00:09:50.841 "8dd81be3-56ad-45d1-acbb-bd68dcb92841" 00:09:50.841 ], 00:09:50.841 "product_name": "Malloc disk", 00:09:50.841 "block_size": 512, 00:09:50.841 "num_blocks": 65536, 00:09:50.841 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:50.841 "assigned_rate_limits": { 00:09:50.841 "rw_ios_per_sec": 0, 00:09:50.841 "rw_mbytes_per_sec": 0, 00:09:50.841 "r_mbytes_per_sec": 0, 00:09:50.841 "w_mbytes_per_sec": 0 00:09:50.841 }, 00:09:50.841 "claimed": false, 00:09:50.841 "zoned": false, 00:09:50.841 "supported_io_types": { 00:09:50.841 "read": true, 00:09:50.841 "write": true, 00:09:50.841 "unmap": true, 00:09:50.841 "flush": true, 00:09:50.841 "reset": true, 00:09:50.841 "nvme_admin": false, 00:09:50.841 "nvme_io": false, 00:09:50.841 "nvme_io_md": false, 00:09:50.841 "write_zeroes": true, 00:09:50.841 "zcopy": true, 00:09:50.841 "get_zone_info": false, 00:09:50.841 "zone_management": false, 00:09:50.841 "zone_append": false, 00:09:50.841 "compare": false, 00:09:50.841 "compare_and_write": false, 00:09:50.841 "abort": true, 00:09:50.841 "seek_hole": false, 00:09:50.841 "seek_data": false, 00:09:50.841 
"copy": true, 00:09:50.841 "nvme_iov_md": false 00:09:50.841 }, 00:09:50.841 "memory_domains": [ 00:09:50.841 { 00:09:50.841 "dma_device_id": "system", 00:09:50.841 "dma_device_type": 1 00:09:50.841 }, 00:09:50.841 { 00:09:50.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.841 "dma_device_type": 2 00:09:50.841 } 00:09:50.841 ], 00:09:50.841 "driver_specific": {} 00:09:50.841 } 00:09:50.841 ] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 [2024-11-28 16:22:42.454378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.841 [2024-11-28 16:22:42.454468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.841 [2024-11-28 16:22:42.454525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.841 [2024-11-28 16:22:42.456312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.841 [2024-11-28 16:22:42.456400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.841 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.841 "name": "Existed_Raid", 00:09:50.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.841 "strip_size_kb": 64, 00:09:50.841 "state": "configuring", 00:09:50.841 
"raid_level": "concat", 00:09:50.841 "superblock": false, 00:09:50.841 "num_base_bdevs": 4, 00:09:50.841 "num_base_bdevs_discovered": 3, 00:09:50.841 "num_base_bdevs_operational": 4, 00:09:50.841 "base_bdevs_list": [ 00:09:50.841 { 00:09:50.841 "name": "BaseBdev1", 00:09:50.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.841 "is_configured": false, 00:09:50.841 "data_offset": 0, 00:09:50.841 "data_size": 0 00:09:50.841 }, 00:09:50.841 { 00:09:50.842 "name": "BaseBdev2", 00:09:50.842 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:50.842 "is_configured": true, 00:09:50.842 "data_offset": 0, 00:09:50.842 "data_size": 65536 00:09:50.842 }, 00:09:50.842 { 00:09:50.842 "name": "BaseBdev3", 00:09:50.842 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:50.842 "is_configured": true, 00:09:50.842 "data_offset": 0, 00:09:50.842 "data_size": 65536 00:09:50.842 }, 00:09:50.842 { 00:09:50.842 "name": "BaseBdev4", 00:09:50.842 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:50.842 "is_configured": true, 00:09:50.842 "data_offset": 0, 00:09:50.842 "data_size": 65536 00:09:50.842 } 00:09:50.842 ] 00:09:50.842 }' 00:09:50.842 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.842 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.412 [2024-11-28 16:22:42.877647] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.412 "name": "Existed_Raid", 00:09:51.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.412 "strip_size_kb": 64, 00:09:51.412 "state": "configuring", 00:09:51.412 "raid_level": "concat", 00:09:51.412 "superblock": false, 
00:09:51.412 "num_base_bdevs": 4, 00:09:51.412 "num_base_bdevs_discovered": 2, 00:09:51.412 "num_base_bdevs_operational": 4, 00:09:51.412 "base_bdevs_list": [ 00:09:51.412 { 00:09:51.412 "name": "BaseBdev1", 00:09:51.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.412 "is_configured": false, 00:09:51.412 "data_offset": 0, 00:09:51.412 "data_size": 0 00:09:51.412 }, 00:09:51.412 { 00:09:51.412 "name": null, 00:09:51.412 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:51.412 "is_configured": false, 00:09:51.412 "data_offset": 0, 00:09:51.412 "data_size": 65536 00:09:51.412 }, 00:09:51.412 { 00:09:51.412 "name": "BaseBdev3", 00:09:51.412 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:51.412 "is_configured": true, 00:09:51.412 "data_offset": 0, 00:09:51.412 "data_size": 65536 00:09:51.412 }, 00:09:51.412 { 00:09:51.412 "name": "BaseBdev4", 00:09:51.412 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:51.412 "is_configured": true, 00:09:51.412 "data_offset": 0, 00:09:51.412 "data_size": 65536 00:09:51.412 } 00:09:51.412 ] 00:09:51.412 }' 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.412 16:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:51.672 16:22:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.672 [2024-11-28 16:22:43.375604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.672 BaseBdev1 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.672 [ 00:09:51.672 { 00:09:51.672 "name": "BaseBdev1", 00:09:51.672 "aliases": [ 00:09:51.672 "897d2946-baf5-4119-a957-9d7b2ee09c7b" 00:09:51.672 ], 00:09:51.672 "product_name": "Malloc disk", 00:09:51.672 "block_size": 512, 00:09:51.672 "num_blocks": 65536, 00:09:51.672 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:51.672 "assigned_rate_limits": { 00:09:51.672 "rw_ios_per_sec": 0, 00:09:51.672 "rw_mbytes_per_sec": 0, 00:09:51.672 "r_mbytes_per_sec": 0, 00:09:51.672 "w_mbytes_per_sec": 0 00:09:51.672 }, 00:09:51.672 "claimed": true, 00:09:51.672 "claim_type": "exclusive_write", 00:09:51.672 "zoned": false, 00:09:51.672 "supported_io_types": { 00:09:51.672 "read": true, 00:09:51.672 "write": true, 00:09:51.672 "unmap": true, 00:09:51.672 "flush": true, 00:09:51.672 "reset": true, 00:09:51.672 "nvme_admin": false, 00:09:51.672 "nvme_io": false, 00:09:51.672 "nvme_io_md": false, 00:09:51.672 "write_zeroes": true, 00:09:51.672 "zcopy": true, 00:09:51.672 "get_zone_info": false, 00:09:51.672 "zone_management": false, 00:09:51.672 "zone_append": false, 00:09:51.672 "compare": false, 00:09:51.672 "compare_and_write": false, 00:09:51.672 "abort": true, 00:09:51.672 "seek_hole": false, 00:09:51.672 "seek_data": false, 00:09:51.672 "copy": true, 00:09:51.672 "nvme_iov_md": false 00:09:51.672 }, 00:09:51.672 "memory_domains": [ 00:09:51.672 { 00:09:51.672 "dma_device_id": "system", 00:09:51.672 "dma_device_type": 1 00:09:51.672 }, 00:09:51.672 { 00:09:51.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.672 "dma_device_type": 2 00:09:51.672 } 00:09:51.672 ], 00:09:51.672 "driver_specific": {} 00:09:51.672 } 00:09:51.672 ] 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.672 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.932 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.932 "name": "Existed_Raid", 00:09:51.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.932 "strip_size_kb": 64, 00:09:51.932 "state": "configuring", 00:09:51.932 "raid_level": "concat", 00:09:51.932 "superblock": false, 
00:09:51.932 "num_base_bdevs": 4, 00:09:51.932 "num_base_bdevs_discovered": 3, 00:09:51.932 "num_base_bdevs_operational": 4, 00:09:51.932 "base_bdevs_list": [ 00:09:51.932 { 00:09:51.932 "name": "BaseBdev1", 00:09:51.932 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:51.932 "is_configured": true, 00:09:51.932 "data_offset": 0, 00:09:51.932 "data_size": 65536 00:09:51.932 }, 00:09:51.932 { 00:09:51.932 "name": null, 00:09:51.932 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:51.932 "is_configured": false, 00:09:51.932 "data_offset": 0, 00:09:51.932 "data_size": 65536 00:09:51.932 }, 00:09:51.932 { 00:09:51.932 "name": "BaseBdev3", 00:09:51.932 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:51.932 "is_configured": true, 00:09:51.932 "data_offset": 0, 00:09:51.932 "data_size": 65536 00:09:51.932 }, 00:09:51.932 { 00:09:51.932 "name": "BaseBdev4", 00:09:51.932 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:51.932 "is_configured": true, 00:09:51.932 "data_offset": 0, 00:09:51.932 "data_size": 65536 00:09:51.932 } 00:09:51.932 ] 00:09:51.932 }' 00:09:51.932 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.932 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:52.192 16:22:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.192 [2024-11-28 16:22:43.886776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.192 16:22:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.192 "name": "Existed_Raid", 00:09:52.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.192 "strip_size_kb": 64, 00:09:52.192 "state": "configuring", 00:09:52.192 "raid_level": "concat", 00:09:52.192 "superblock": false, 00:09:52.192 "num_base_bdevs": 4, 00:09:52.192 "num_base_bdevs_discovered": 2, 00:09:52.192 "num_base_bdevs_operational": 4, 00:09:52.192 "base_bdevs_list": [ 00:09:52.192 { 00:09:52.192 "name": "BaseBdev1", 00:09:52.192 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:52.192 "is_configured": true, 00:09:52.192 "data_offset": 0, 00:09:52.192 "data_size": 65536 00:09:52.192 }, 00:09:52.192 { 00:09:52.192 "name": null, 00:09:52.192 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:52.192 "is_configured": false, 00:09:52.192 "data_offset": 0, 00:09:52.192 "data_size": 65536 00:09:52.192 }, 00:09:52.192 { 00:09:52.192 "name": null, 00:09:52.192 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:52.192 "is_configured": false, 00:09:52.192 "data_offset": 0, 00:09:52.192 "data_size": 65536 00:09:52.192 }, 00:09:52.192 { 00:09:52.192 "name": "BaseBdev4", 00:09:52.192 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:52.192 "is_configured": true, 00:09:52.192 "data_offset": 0, 00:09:52.192 "data_size": 65536 00:09:52.192 } 00:09:52.192 ] 00:09:52.192 }' 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.192 16:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.761 16:22:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.762 [2024-11-28 16:22:44.373984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.762 "name": "Existed_Raid", 00:09:52.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.762 "strip_size_kb": 64, 00:09:52.762 "state": "configuring", 00:09:52.762 "raid_level": "concat", 00:09:52.762 "superblock": false, 00:09:52.762 "num_base_bdevs": 4, 00:09:52.762 "num_base_bdevs_discovered": 3, 00:09:52.762 "num_base_bdevs_operational": 4, 00:09:52.762 "base_bdevs_list": [ 00:09:52.762 { 00:09:52.762 "name": "BaseBdev1", 00:09:52.762 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:52.762 "is_configured": true, 00:09:52.762 "data_offset": 0, 00:09:52.762 "data_size": 65536 00:09:52.762 }, 00:09:52.762 { 00:09:52.762 "name": null, 00:09:52.762 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:52.762 "is_configured": false, 00:09:52.762 "data_offset": 0, 00:09:52.762 "data_size": 65536 00:09:52.762 }, 00:09:52.762 { 00:09:52.762 "name": "BaseBdev3", 00:09:52.762 "uuid": 
"a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:52.762 "is_configured": true, 00:09:52.762 "data_offset": 0, 00:09:52.762 "data_size": 65536 00:09:52.762 }, 00:09:52.762 { 00:09:52.762 "name": "BaseBdev4", 00:09:52.762 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:52.762 "is_configured": true, 00:09:52.762 "data_offset": 0, 00:09:52.762 "data_size": 65536 00:09:52.762 } 00:09:52.762 ] 00:09:52.762 }' 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.762 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.331 [2024-11-28 16:22:44.857167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.331 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.331 "name": "Existed_Raid", 00:09:53.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.331 "strip_size_kb": 64, 00:09:53.331 "state": "configuring", 00:09:53.331 "raid_level": "concat", 00:09:53.331 "superblock": false, 00:09:53.331 "num_base_bdevs": 4, 00:09:53.331 
"num_base_bdevs_discovered": 2, 00:09:53.331 "num_base_bdevs_operational": 4, 00:09:53.331 "base_bdevs_list": [ 00:09:53.331 { 00:09:53.331 "name": null, 00:09:53.331 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:53.331 "is_configured": false, 00:09:53.331 "data_offset": 0, 00:09:53.331 "data_size": 65536 00:09:53.331 }, 00:09:53.331 { 00:09:53.331 "name": null, 00:09:53.331 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:53.331 "is_configured": false, 00:09:53.331 "data_offset": 0, 00:09:53.331 "data_size": 65536 00:09:53.332 }, 00:09:53.332 { 00:09:53.332 "name": "BaseBdev3", 00:09:53.332 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:53.332 "is_configured": true, 00:09:53.332 "data_offset": 0, 00:09:53.332 "data_size": 65536 00:09:53.332 }, 00:09:53.332 { 00:09:53.332 "name": "BaseBdev4", 00:09:53.332 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:53.332 "is_configured": true, 00:09:53.332 "data_offset": 0, 00:09:53.332 "data_size": 65536 00:09:53.332 } 00:09:53.332 ] 00:09:53.332 }' 00:09:53.332 16:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.332 16:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.591 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.851 [2024-11-28 16:22:45.362513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.851 "name": "Existed_Raid", 00:09:53.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.851 "strip_size_kb": 64, 00:09:53.851 "state": "configuring", 00:09:53.851 "raid_level": "concat", 00:09:53.851 "superblock": false, 00:09:53.851 "num_base_bdevs": 4, 00:09:53.851 "num_base_bdevs_discovered": 3, 00:09:53.851 "num_base_bdevs_operational": 4, 00:09:53.851 "base_bdevs_list": [ 00:09:53.851 { 00:09:53.851 "name": null, 00:09:53.851 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:53.851 "is_configured": false, 00:09:53.851 "data_offset": 0, 00:09:53.851 "data_size": 65536 00:09:53.851 }, 00:09:53.851 { 00:09:53.851 "name": "BaseBdev2", 00:09:53.851 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:53.851 "is_configured": true, 00:09:53.851 "data_offset": 0, 00:09:53.851 "data_size": 65536 00:09:53.851 }, 00:09:53.851 { 00:09:53.851 "name": "BaseBdev3", 00:09:53.851 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:53.851 "is_configured": true, 00:09:53.851 "data_offset": 0, 00:09:53.851 "data_size": 65536 00:09:53.851 }, 00:09:53.851 { 00:09:53.851 "name": "BaseBdev4", 00:09:53.851 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:53.851 "is_configured": true, 00:09:53.851 "data_offset": 0, 00:09:53.851 "data_size": 65536 00:09:53.851 } 00:09:53.851 ] 00:09:53.851 }' 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.851 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 897d2946-baf5-4119-a957-9d7b2ee09c7b 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.112 [2024-11-28 16:22:45.812483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:54.112 [2024-11-28 16:22:45.812527] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:54.112 [2024-11-28 16:22:45.812534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:54.112 [2024-11-28 16:22:45.812776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:09:54.112 [2024-11-28 16:22:45.812907] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:54.112 [2024-11-28 16:22:45.812921] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:54.112 [2024-11-28 16:22:45.813089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.112 NewBaseBdev 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.112 [ 00:09:54.112 { 00:09:54.112 "name": "NewBaseBdev", 00:09:54.112 "aliases": [ 00:09:54.112 "897d2946-baf5-4119-a957-9d7b2ee09c7b" 00:09:54.112 ], 00:09:54.112 "product_name": "Malloc disk", 00:09:54.112 "block_size": 512, 00:09:54.112 "num_blocks": 65536, 00:09:54.112 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:54.112 "assigned_rate_limits": { 00:09:54.112 "rw_ios_per_sec": 0, 00:09:54.112 "rw_mbytes_per_sec": 0, 00:09:54.112 "r_mbytes_per_sec": 0, 00:09:54.112 "w_mbytes_per_sec": 0 00:09:54.112 }, 00:09:54.112 "claimed": true, 00:09:54.112 "claim_type": "exclusive_write", 00:09:54.112 "zoned": false, 00:09:54.112 "supported_io_types": { 00:09:54.112 "read": true, 00:09:54.112 "write": true, 00:09:54.112 "unmap": true, 00:09:54.112 "flush": true, 00:09:54.112 "reset": true, 00:09:54.112 "nvme_admin": false, 00:09:54.112 "nvme_io": false, 00:09:54.112 "nvme_io_md": false, 00:09:54.112 "write_zeroes": true, 00:09:54.112 "zcopy": true, 00:09:54.112 "get_zone_info": false, 00:09:54.112 "zone_management": false, 00:09:54.112 "zone_append": false, 00:09:54.112 "compare": false, 00:09:54.112 "compare_and_write": false, 00:09:54.112 "abort": true, 00:09:54.112 "seek_hole": false, 00:09:54.112 "seek_data": false, 00:09:54.112 "copy": true, 00:09:54.112 "nvme_iov_md": false 00:09:54.112 }, 00:09:54.112 "memory_domains": [ 00:09:54.112 { 00:09:54.112 "dma_device_id": "system", 00:09:54.112 "dma_device_type": 1 00:09:54.112 }, 00:09:54.112 { 00:09:54.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.112 "dma_device_type": 2 00:09:54.112 } 00:09:54.112 ], 00:09:54.112 "driver_specific": {} 00:09:54.112 } 00:09:54.112 ] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.112 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.371 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.371 "name": "Existed_Raid", 00:09:54.371 "uuid": "94f6d6f7-cf6e-4e75-b217-d5e338ac25c7", 00:09:54.371 "strip_size_kb": 64, 00:09:54.371 "state": "online", 00:09:54.371 "raid_level": "concat", 00:09:54.371 "superblock": false, 00:09:54.371 
"num_base_bdevs": 4, 00:09:54.371 "num_base_bdevs_discovered": 4, 00:09:54.371 "num_base_bdevs_operational": 4, 00:09:54.371 "base_bdevs_list": [ 00:09:54.371 { 00:09:54.371 "name": "NewBaseBdev", 00:09:54.371 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:54.371 "is_configured": true, 00:09:54.371 "data_offset": 0, 00:09:54.371 "data_size": 65536 00:09:54.371 }, 00:09:54.371 { 00:09:54.371 "name": "BaseBdev2", 00:09:54.371 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:54.371 "is_configured": true, 00:09:54.371 "data_offset": 0, 00:09:54.371 "data_size": 65536 00:09:54.371 }, 00:09:54.371 { 00:09:54.371 "name": "BaseBdev3", 00:09:54.371 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:54.371 "is_configured": true, 00:09:54.371 "data_offset": 0, 00:09:54.371 "data_size": 65536 00:09:54.371 }, 00:09:54.372 { 00:09:54.372 "name": "BaseBdev4", 00:09:54.372 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:54.372 "is_configured": true, 00:09:54.372 "data_offset": 0, 00:09:54.372 "data_size": 65536 00:09:54.372 } 00:09:54.372 ] 00:09:54.372 }' 00:09:54.372 16:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.372 16:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.631 16:22:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.631 [2024-11-28 16:22:46.296068] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.631 "name": "Existed_Raid", 00:09:54.631 "aliases": [ 00:09:54.631 "94f6d6f7-cf6e-4e75-b217-d5e338ac25c7" 00:09:54.631 ], 00:09:54.631 "product_name": "Raid Volume", 00:09:54.631 "block_size": 512, 00:09:54.631 "num_blocks": 262144, 00:09:54.631 "uuid": "94f6d6f7-cf6e-4e75-b217-d5e338ac25c7", 00:09:54.631 "assigned_rate_limits": { 00:09:54.631 "rw_ios_per_sec": 0, 00:09:54.631 "rw_mbytes_per_sec": 0, 00:09:54.631 "r_mbytes_per_sec": 0, 00:09:54.631 "w_mbytes_per_sec": 0 00:09:54.631 }, 00:09:54.631 "claimed": false, 00:09:54.631 "zoned": false, 00:09:54.631 "supported_io_types": { 00:09:54.631 "read": true, 00:09:54.631 "write": true, 00:09:54.631 "unmap": true, 00:09:54.631 "flush": true, 00:09:54.631 "reset": true, 00:09:54.631 "nvme_admin": false, 00:09:54.631 "nvme_io": false, 00:09:54.631 "nvme_io_md": false, 00:09:54.631 "write_zeroes": true, 00:09:54.631 "zcopy": false, 00:09:54.631 "get_zone_info": false, 00:09:54.631 "zone_management": false, 00:09:54.631 "zone_append": false, 00:09:54.631 "compare": false, 00:09:54.631 "compare_and_write": false, 00:09:54.631 "abort": false, 00:09:54.631 "seek_hole": false, 00:09:54.631 "seek_data": false, 00:09:54.631 "copy": false, 00:09:54.631 "nvme_iov_md": false 00:09:54.631 }, 
00:09:54.631 "memory_domains": [ 00:09:54.631 { 00:09:54.631 "dma_device_id": "system", 00:09:54.631 "dma_device_type": 1 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.631 "dma_device_type": 2 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "system", 00:09:54.631 "dma_device_type": 1 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.631 "dma_device_type": 2 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "system", 00:09:54.631 "dma_device_type": 1 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.631 "dma_device_type": 2 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "system", 00:09:54.631 "dma_device_type": 1 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.631 "dma_device_type": 2 00:09:54.631 } 00:09:54.631 ], 00:09:54.631 "driver_specific": { 00:09:54.631 "raid": { 00:09:54.631 "uuid": "94f6d6f7-cf6e-4e75-b217-d5e338ac25c7", 00:09:54.631 "strip_size_kb": 64, 00:09:54.631 "state": "online", 00:09:54.631 "raid_level": "concat", 00:09:54.631 "superblock": false, 00:09:54.631 "num_base_bdevs": 4, 00:09:54.631 "num_base_bdevs_discovered": 4, 00:09:54.631 "num_base_bdevs_operational": 4, 00:09:54.631 "base_bdevs_list": [ 00:09:54.631 { 00:09:54.631 "name": "NewBaseBdev", 00:09:54.631 "uuid": "897d2946-baf5-4119-a957-9d7b2ee09c7b", 00:09:54.631 "is_configured": true, 00:09:54.631 "data_offset": 0, 00:09:54.631 "data_size": 65536 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "name": "BaseBdev2", 00:09:54.631 "uuid": "68d53d19-d19d-4e20-b9c0-f6dffe4e6b95", 00:09:54.631 "is_configured": true, 00:09:54.631 "data_offset": 0, 00:09:54.631 "data_size": 65536 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "name": "BaseBdev3", 00:09:54.631 "uuid": "a256276f-eb5b-45c6-8c66-8dd86204ec38", 00:09:54.631 "is_configured": true, 00:09:54.631 "data_offset": 0, 
00:09:54.631 "data_size": 65536 00:09:54.631 }, 00:09:54.631 { 00:09:54.631 "name": "BaseBdev4", 00:09:54.631 "uuid": "8dd81be3-56ad-45d1-acbb-bd68dcb92841", 00:09:54.631 "is_configured": true, 00:09:54.631 "data_offset": 0, 00:09:54.631 "data_size": 65536 00:09:54.631 } 00:09:54.631 ] 00:09:54.631 } 00:09:54.631 } 00:09:54.631 }' 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:54.631 BaseBdev2 00:09:54.631 BaseBdev3 00:09:54.631 BaseBdev4' 00:09:54.631 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.890 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.890 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.890 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.890 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:54.890 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.890 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.891 [2024-11-28 16:22:46.587199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.891 [2024-11-28 16:22:46.587229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.891 [2024-11-28 16:22:46.587296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.891 [2024-11-28 16:22:46.587358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.891 [2024-11-28 16:22:46.587368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82183 00:09:54.891 16:22:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82183 ']' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82183 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82183 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.891 killing process with pid 82183 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82183' 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82183 00:09:54.891 [2024-11-28 16:22:46.637493] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.891 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82183 00:09:55.151 [2024-11-28 16:22:46.678055] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.438 00:09:55.438 real 0m9.372s 00:09:55.438 user 0m15.990s 00:09:55.438 sys 0m1.939s 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 ************************************ 00:09:55.438 END TEST raid_state_function_test 00:09:55.438 ************************************ 00:09:55.438 16:22:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:55.438 16:22:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:55.438 16:22:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.438 16:22:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 ************************************ 00:09:55.438 START TEST raid_state_function_test_sb 00:09:55.438 ************************************ 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.438 16:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:55.438 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82832 00:09:55.439 16:22:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.439 Process raid pid: 82832 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82832' 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82832 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82832 ']' 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.439 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.439 [2024-11-28 16:22:47.082072] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:55.439 [2024-11-28 16:22:47.082197] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.698 [2024-11-28 16:22:47.243684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.698 [2024-11-28 16:22:47.288504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.698 [2024-11-28 16:22:47.329389] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.698 [2024-11-28 16:22:47.329428] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.268 [2024-11-28 16:22:47.914201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.268 [2024-11-28 16:22:47.914251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.268 [2024-11-28 16:22:47.914263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.268 [2024-11-28 16:22:47.914272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.268 [2024-11-28 16:22:47.914278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:56.268 [2024-11-28 16:22:47.914382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.268 [2024-11-28 16:22:47.914389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.268 [2024-11-28 16:22:47.914399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.268 
16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.268 "name": "Existed_Raid", 00:09:56.268 "uuid": "64afdbe1-0d26-4e09-a235-33f8fd033cb6", 00:09:56.268 "strip_size_kb": 64, 00:09:56.268 "state": "configuring", 00:09:56.268 "raid_level": "concat", 00:09:56.268 "superblock": true, 00:09:56.268 "num_base_bdevs": 4, 00:09:56.268 "num_base_bdevs_discovered": 0, 00:09:56.268 "num_base_bdevs_operational": 4, 00:09:56.268 "base_bdevs_list": [ 00:09:56.268 { 00:09:56.268 "name": "BaseBdev1", 00:09:56.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.268 "is_configured": false, 00:09:56.268 "data_offset": 0, 00:09:56.268 "data_size": 0 00:09:56.268 }, 00:09:56.268 { 00:09:56.268 "name": "BaseBdev2", 00:09:56.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.268 "is_configured": false, 00:09:56.268 "data_offset": 0, 00:09:56.268 "data_size": 0 00:09:56.268 }, 00:09:56.268 { 00:09:56.268 "name": "BaseBdev3", 00:09:56.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.268 "is_configured": false, 00:09:56.268 "data_offset": 0, 00:09:56.268 "data_size": 0 00:09:56.268 }, 00:09:56.268 { 00:09:56.268 "name": "BaseBdev4", 00:09:56.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.268 "is_configured": false, 00:09:56.268 "data_offset": 0, 00:09:56.268 "data_size": 0 00:09:56.268 } 00:09:56.268 ] 00:09:56.268 }' 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.268 16:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.839 16:22:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.839 [2024-11-28 16:22:48.349365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.839 [2024-11-28 16:22:48.349409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.839 [2024-11-28 16:22:48.361370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.839 [2024-11-28 16:22:48.361413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.839 [2024-11-28 16:22:48.361421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.839 [2024-11-28 16:22:48.361445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.839 [2024-11-28 16:22:48.361452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.839 [2024-11-28 16:22:48.361461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.839 [2024-11-28 16:22:48.361467] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:56.839 [2024-11-28 16:22:48.361476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.839 [2024-11-28 16:22:48.381894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.839 BaseBdev1 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.839 [ 00:09:56.839 { 00:09:56.839 "name": "BaseBdev1", 00:09:56.839 "aliases": [ 00:09:56.839 "b6b0236c-bbe4-43d7-b17d-5014e3482cb7" 00:09:56.839 ], 00:09:56.839 "product_name": "Malloc disk", 00:09:56.839 "block_size": 512, 00:09:56.839 "num_blocks": 65536, 00:09:56.839 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:56.839 "assigned_rate_limits": { 00:09:56.839 "rw_ios_per_sec": 0, 00:09:56.839 "rw_mbytes_per_sec": 0, 00:09:56.839 "r_mbytes_per_sec": 0, 00:09:56.839 "w_mbytes_per_sec": 0 00:09:56.839 }, 00:09:56.839 "claimed": true, 00:09:56.839 "claim_type": "exclusive_write", 00:09:56.839 "zoned": false, 00:09:56.839 "supported_io_types": { 00:09:56.839 "read": true, 00:09:56.839 "write": true, 00:09:56.839 "unmap": true, 00:09:56.839 "flush": true, 00:09:56.839 "reset": true, 00:09:56.839 "nvme_admin": false, 00:09:56.839 "nvme_io": false, 00:09:56.839 "nvme_io_md": false, 00:09:56.839 "write_zeroes": true, 00:09:56.839 "zcopy": true, 00:09:56.839 "get_zone_info": false, 00:09:56.839 "zone_management": false, 00:09:56.839 "zone_append": false, 00:09:56.839 "compare": false, 00:09:56.839 "compare_and_write": false, 00:09:56.839 "abort": true, 00:09:56.839 "seek_hole": false, 00:09:56.839 "seek_data": false, 00:09:56.839 "copy": true, 00:09:56.839 "nvme_iov_md": false 00:09:56.839 }, 00:09:56.839 "memory_domains": [ 00:09:56.839 { 00:09:56.839 "dma_device_id": "system", 00:09:56.839 "dma_device_type": 1 00:09:56.839 }, 00:09:56.839 { 00:09:56.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.839 "dma_device_type": 2 00:09:56.839 } 
00:09:56.839 ], 00:09:56.839 "driver_specific": {} 00:09:56.839 } 00:09:56.839 ] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:56.839 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.840 16:22:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.840 "name": "Existed_Raid", 00:09:56.840 "uuid": "ae0b02f2-35fe-4a3e-a0da-0d2c53041602", 00:09:56.840 "strip_size_kb": 64, 00:09:56.840 "state": "configuring", 00:09:56.840 "raid_level": "concat", 00:09:56.840 "superblock": true, 00:09:56.840 "num_base_bdevs": 4, 00:09:56.840 "num_base_bdevs_discovered": 1, 00:09:56.840 "num_base_bdevs_operational": 4, 00:09:56.840 "base_bdevs_list": [ 00:09:56.840 { 00:09:56.840 "name": "BaseBdev1", 00:09:56.840 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:56.840 "is_configured": true, 00:09:56.840 "data_offset": 2048, 00:09:56.840 "data_size": 63488 00:09:56.840 }, 00:09:56.840 { 00:09:56.840 "name": "BaseBdev2", 00:09:56.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.840 "is_configured": false, 00:09:56.840 "data_offset": 0, 00:09:56.840 "data_size": 0 00:09:56.840 }, 00:09:56.840 { 00:09:56.840 "name": "BaseBdev3", 00:09:56.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.840 "is_configured": false, 00:09:56.840 "data_offset": 0, 00:09:56.840 "data_size": 0 00:09:56.840 }, 00:09:56.840 { 00:09:56.840 "name": "BaseBdev4", 00:09:56.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.840 "is_configured": false, 00:09:56.840 "data_offset": 0, 00:09:56.840 "data_size": 0 00:09:56.840 } 00:09:56.840 ] 00:09:56.840 }' 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.840 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.102 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.102 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.102 16:22:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.102 [2024-11-28 16:22:48.857097] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.102 [2024-11-28 16:22:48.857143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:57.102 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.103 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.103 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.103 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.103 [2024-11-28 16:22:48.869115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.103 [2024-11-28 16:22:48.870899] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.103 [2024-11-28 16:22:48.870938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.103 [2024-11-28 16:22:48.870948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.103 [2024-11-28 16:22:48.870956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.103 [2024-11-28 16:22:48.870962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.103 [2024-11-28 16:22:48.870970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.362 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.362 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:57.362 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:57.363 "name": "Existed_Raid", 00:09:57.363 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:57.363 "strip_size_kb": 64, 00:09:57.363 "state": "configuring", 00:09:57.363 "raid_level": "concat", 00:09:57.363 "superblock": true, 00:09:57.363 "num_base_bdevs": 4, 00:09:57.363 "num_base_bdevs_discovered": 1, 00:09:57.363 "num_base_bdevs_operational": 4, 00:09:57.363 "base_bdevs_list": [ 00:09:57.363 { 00:09:57.363 "name": "BaseBdev1", 00:09:57.363 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:57.363 "is_configured": true, 00:09:57.363 "data_offset": 2048, 00:09:57.363 "data_size": 63488 00:09:57.363 }, 00:09:57.363 { 00:09:57.363 "name": "BaseBdev2", 00:09:57.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.363 "is_configured": false, 00:09:57.363 "data_offset": 0, 00:09:57.363 "data_size": 0 00:09:57.363 }, 00:09:57.363 { 00:09:57.363 "name": "BaseBdev3", 00:09:57.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.363 "is_configured": false, 00:09:57.363 "data_offset": 0, 00:09:57.363 "data_size": 0 00:09:57.363 }, 00:09:57.363 { 00:09:57.363 "name": "BaseBdev4", 00:09:57.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.363 "is_configured": false, 00:09:57.363 "data_offset": 0, 00:09:57.363 "data_size": 0 00:09:57.363 } 00:09:57.363 ] 00:09:57.363 }' 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.363 16:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.623 [2024-11-28 16:22:49.285369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:57.623 BaseBdev2 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.623 [ 00:09:57.623 { 00:09:57.623 "name": "BaseBdev2", 00:09:57.623 "aliases": [ 00:09:57.623 "549f7434-e4b7-4c41-a647-d55d6b9a6c3b" 00:09:57.623 ], 00:09:57.623 "product_name": "Malloc disk", 00:09:57.623 "block_size": 512, 00:09:57.623 "num_blocks": 65536, 00:09:57.623 "uuid": "549f7434-e4b7-4c41-a647-d55d6b9a6c3b", 
00:09:57.623 "assigned_rate_limits": { 00:09:57.623 "rw_ios_per_sec": 0, 00:09:57.623 "rw_mbytes_per_sec": 0, 00:09:57.623 "r_mbytes_per_sec": 0, 00:09:57.623 "w_mbytes_per_sec": 0 00:09:57.623 }, 00:09:57.623 "claimed": true, 00:09:57.623 "claim_type": "exclusive_write", 00:09:57.623 "zoned": false, 00:09:57.623 "supported_io_types": { 00:09:57.623 "read": true, 00:09:57.623 "write": true, 00:09:57.623 "unmap": true, 00:09:57.623 "flush": true, 00:09:57.623 "reset": true, 00:09:57.623 "nvme_admin": false, 00:09:57.623 "nvme_io": false, 00:09:57.623 "nvme_io_md": false, 00:09:57.623 "write_zeroes": true, 00:09:57.623 "zcopy": true, 00:09:57.623 "get_zone_info": false, 00:09:57.623 "zone_management": false, 00:09:57.623 "zone_append": false, 00:09:57.623 "compare": false, 00:09:57.623 "compare_and_write": false, 00:09:57.623 "abort": true, 00:09:57.623 "seek_hole": false, 00:09:57.623 "seek_data": false, 00:09:57.623 "copy": true, 00:09:57.623 "nvme_iov_md": false 00:09:57.623 }, 00:09:57.623 "memory_domains": [ 00:09:57.623 { 00:09:57.623 "dma_device_id": "system", 00:09:57.623 "dma_device_type": 1 00:09:57.623 }, 00:09:57.623 { 00:09:57.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.623 "dma_device_type": 2 00:09:57.623 } 00:09:57.623 ], 00:09:57.623 "driver_specific": {} 00:09:57.623 } 00:09:57.623 ] 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.623 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.624 "name": "Existed_Raid", 00:09:57.624 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:57.624 "strip_size_kb": 64, 00:09:57.624 "state": "configuring", 00:09:57.624 "raid_level": "concat", 00:09:57.624 "superblock": true, 00:09:57.624 "num_base_bdevs": 4, 00:09:57.624 "num_base_bdevs_discovered": 2, 00:09:57.624 
"num_base_bdevs_operational": 4, 00:09:57.624 "base_bdevs_list": [ 00:09:57.624 { 00:09:57.624 "name": "BaseBdev1", 00:09:57.624 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:57.624 "is_configured": true, 00:09:57.624 "data_offset": 2048, 00:09:57.624 "data_size": 63488 00:09:57.624 }, 00:09:57.624 { 00:09:57.624 "name": "BaseBdev2", 00:09:57.624 "uuid": "549f7434-e4b7-4c41-a647-d55d6b9a6c3b", 00:09:57.624 "is_configured": true, 00:09:57.624 "data_offset": 2048, 00:09:57.624 "data_size": 63488 00:09:57.624 }, 00:09:57.624 { 00:09:57.624 "name": "BaseBdev3", 00:09:57.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.624 "is_configured": false, 00:09:57.624 "data_offset": 0, 00:09:57.624 "data_size": 0 00:09:57.624 }, 00:09:57.624 { 00:09:57.624 "name": "BaseBdev4", 00:09:57.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.624 "is_configured": false, 00:09:57.624 "data_offset": 0, 00:09:57.624 "data_size": 0 00:09:57.624 } 00:09:57.624 ] 00:09:57.624 }' 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.624 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.194 [2024-11-28 16:22:49.807377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.194 BaseBdev3 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.194 [ 00:09:58.194 { 00:09:58.194 "name": "BaseBdev3", 00:09:58.194 "aliases": [ 00:09:58.194 "8f323780-6d7c-45dd-acbd-cb0ea3153aef" 00:09:58.194 ], 00:09:58.194 "product_name": "Malloc disk", 00:09:58.194 "block_size": 512, 00:09:58.194 "num_blocks": 65536, 00:09:58.194 "uuid": "8f323780-6d7c-45dd-acbd-cb0ea3153aef", 00:09:58.194 "assigned_rate_limits": { 00:09:58.194 "rw_ios_per_sec": 0, 00:09:58.194 "rw_mbytes_per_sec": 0, 00:09:58.194 "r_mbytes_per_sec": 0, 00:09:58.194 "w_mbytes_per_sec": 0 00:09:58.194 }, 00:09:58.194 "claimed": true, 00:09:58.194 "claim_type": "exclusive_write", 00:09:58.194 "zoned": false, 00:09:58.194 "supported_io_types": { 
00:09:58.194 "read": true, 00:09:58.194 "write": true, 00:09:58.194 "unmap": true, 00:09:58.194 "flush": true, 00:09:58.194 "reset": true, 00:09:58.194 "nvme_admin": false, 00:09:58.194 "nvme_io": false, 00:09:58.194 "nvme_io_md": false, 00:09:58.194 "write_zeroes": true, 00:09:58.194 "zcopy": true, 00:09:58.194 "get_zone_info": false, 00:09:58.194 "zone_management": false, 00:09:58.194 "zone_append": false, 00:09:58.194 "compare": false, 00:09:58.194 "compare_and_write": false, 00:09:58.194 "abort": true, 00:09:58.194 "seek_hole": false, 00:09:58.194 "seek_data": false, 00:09:58.194 "copy": true, 00:09:58.194 "nvme_iov_md": false 00:09:58.194 }, 00:09:58.194 "memory_domains": [ 00:09:58.194 { 00:09:58.194 "dma_device_id": "system", 00:09:58.194 "dma_device_type": 1 00:09:58.194 }, 00:09:58.194 { 00:09:58.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.194 "dma_device_type": 2 00:09:58.194 } 00:09:58.194 ], 00:09:58.194 "driver_specific": {} 00:09:58.194 } 00:09:58.194 ] 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.194 "name": "Existed_Raid", 00:09:58.194 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:58.194 "strip_size_kb": 64, 00:09:58.194 "state": "configuring", 00:09:58.194 "raid_level": "concat", 00:09:58.194 "superblock": true, 00:09:58.194 "num_base_bdevs": 4, 00:09:58.194 "num_base_bdevs_discovered": 3, 00:09:58.194 "num_base_bdevs_operational": 4, 00:09:58.194 "base_bdevs_list": [ 00:09:58.194 { 00:09:58.194 "name": "BaseBdev1", 00:09:58.194 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:58.194 "is_configured": true, 00:09:58.194 "data_offset": 2048, 00:09:58.194 "data_size": 63488 00:09:58.194 }, 00:09:58.194 { 00:09:58.194 "name": "BaseBdev2", 00:09:58.194 
"uuid": "549f7434-e4b7-4c41-a647-d55d6b9a6c3b", 00:09:58.194 "is_configured": true, 00:09:58.194 "data_offset": 2048, 00:09:58.194 "data_size": 63488 00:09:58.194 }, 00:09:58.194 { 00:09:58.194 "name": "BaseBdev3", 00:09:58.194 "uuid": "8f323780-6d7c-45dd-acbd-cb0ea3153aef", 00:09:58.194 "is_configured": true, 00:09:58.194 "data_offset": 2048, 00:09:58.194 "data_size": 63488 00:09:58.194 }, 00:09:58.194 { 00:09:58.194 "name": "BaseBdev4", 00:09:58.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.194 "is_configured": false, 00:09:58.194 "data_offset": 0, 00:09:58.194 "data_size": 0 00:09:58.194 } 00:09:58.194 ] 00:09:58.194 }' 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.194 16:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.764 [2024-11-28 16:22:50.273484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.764 [2024-11-28 16:22:50.273693] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:58.764 [2024-11-28 16:22:50.273708] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:58.764 BaseBdev4 00:09:58.764 [2024-11-28 16:22:50.273991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.764 [2024-11-28 16:22:50.274127] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:58.764 [2024-11-28 16:22:50.274155] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:58.764 [2024-11-28 16:22:50.274259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.764 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 [ 00:09:58.765 { 00:09:58.765 "name": "BaseBdev4", 00:09:58.765 "aliases": [ 00:09:58.765 "cc22fa0d-6f6e-43ab-935d-cf8d9d4672e6" 00:09:58.765 ], 00:09:58.765 "product_name": "Malloc disk", 00:09:58.765 "block_size": 512, 00:09:58.765 
"num_blocks": 65536, 00:09:58.765 "uuid": "cc22fa0d-6f6e-43ab-935d-cf8d9d4672e6", 00:09:58.765 "assigned_rate_limits": { 00:09:58.765 "rw_ios_per_sec": 0, 00:09:58.765 "rw_mbytes_per_sec": 0, 00:09:58.765 "r_mbytes_per_sec": 0, 00:09:58.765 "w_mbytes_per_sec": 0 00:09:58.765 }, 00:09:58.765 "claimed": true, 00:09:58.765 "claim_type": "exclusive_write", 00:09:58.765 "zoned": false, 00:09:58.765 "supported_io_types": { 00:09:58.765 "read": true, 00:09:58.765 "write": true, 00:09:58.765 "unmap": true, 00:09:58.765 "flush": true, 00:09:58.765 "reset": true, 00:09:58.765 "nvme_admin": false, 00:09:58.765 "nvme_io": false, 00:09:58.765 "nvme_io_md": false, 00:09:58.765 "write_zeroes": true, 00:09:58.765 "zcopy": true, 00:09:58.765 "get_zone_info": false, 00:09:58.765 "zone_management": false, 00:09:58.765 "zone_append": false, 00:09:58.765 "compare": false, 00:09:58.765 "compare_and_write": false, 00:09:58.765 "abort": true, 00:09:58.765 "seek_hole": false, 00:09:58.765 "seek_data": false, 00:09:58.765 "copy": true, 00:09:58.765 "nvme_iov_md": false 00:09:58.765 }, 00:09:58.765 "memory_domains": [ 00:09:58.765 { 00:09:58.765 "dma_device_id": "system", 00:09:58.765 "dma_device_type": 1 00:09:58.765 }, 00:09:58.765 { 00:09:58.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.765 "dma_device_type": 2 00:09:58.765 } 00:09:58.765 ], 00:09:58.765 "driver_specific": {} 00:09:58.765 } 00:09:58.765 ] 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.765 "name": "Existed_Raid", 00:09:58.765 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:58.765 "strip_size_kb": 64, 00:09:58.765 "state": "online", 00:09:58.765 "raid_level": "concat", 00:09:58.765 "superblock": true, 00:09:58.765 "num_base_bdevs": 4, 
00:09:58.765 "num_base_bdevs_discovered": 4, 00:09:58.765 "num_base_bdevs_operational": 4, 00:09:58.765 "base_bdevs_list": [ 00:09:58.765 { 00:09:58.765 "name": "BaseBdev1", 00:09:58.765 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:58.765 "is_configured": true, 00:09:58.765 "data_offset": 2048, 00:09:58.765 "data_size": 63488 00:09:58.765 }, 00:09:58.765 { 00:09:58.765 "name": "BaseBdev2", 00:09:58.765 "uuid": "549f7434-e4b7-4c41-a647-d55d6b9a6c3b", 00:09:58.765 "is_configured": true, 00:09:58.765 "data_offset": 2048, 00:09:58.765 "data_size": 63488 00:09:58.765 }, 00:09:58.765 { 00:09:58.765 "name": "BaseBdev3", 00:09:58.765 "uuid": "8f323780-6d7c-45dd-acbd-cb0ea3153aef", 00:09:58.765 "is_configured": true, 00:09:58.765 "data_offset": 2048, 00:09:58.765 "data_size": 63488 00:09:58.765 }, 00:09:58.765 { 00:09:58.765 "name": "BaseBdev4", 00:09:58.765 "uuid": "cc22fa0d-6f6e-43ab-935d-cf8d9d4672e6", 00:09:58.765 "is_configured": true, 00:09:58.765 "data_offset": 2048, 00:09:58.765 "data_size": 63488 00:09:58.765 } 00:09:58.765 ] 00:09:58.765 }' 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.765 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.025 
16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.025 [2024-11-28 16:22:50.721130] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.025 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.025 "name": "Existed_Raid", 00:09:59.025 "aliases": [ 00:09:59.025 "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de" 00:09:59.025 ], 00:09:59.025 "product_name": "Raid Volume", 00:09:59.025 "block_size": 512, 00:09:59.025 "num_blocks": 253952, 00:09:59.025 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:59.025 "assigned_rate_limits": { 00:09:59.025 "rw_ios_per_sec": 0, 00:09:59.025 "rw_mbytes_per_sec": 0, 00:09:59.025 "r_mbytes_per_sec": 0, 00:09:59.025 "w_mbytes_per_sec": 0 00:09:59.025 }, 00:09:59.025 "claimed": false, 00:09:59.025 "zoned": false, 00:09:59.025 "supported_io_types": { 00:09:59.025 "read": true, 00:09:59.025 "write": true, 00:09:59.025 "unmap": true, 00:09:59.025 "flush": true, 00:09:59.025 "reset": true, 00:09:59.025 "nvme_admin": false, 00:09:59.025 "nvme_io": false, 00:09:59.025 "nvme_io_md": false, 00:09:59.025 "write_zeroes": true, 00:09:59.025 "zcopy": false, 00:09:59.025 "get_zone_info": false, 00:09:59.025 "zone_management": false, 00:09:59.025 "zone_append": false, 00:09:59.025 "compare": false, 00:09:59.025 "compare_and_write": false, 00:09:59.025 "abort": false, 00:09:59.025 "seek_hole": false, 00:09:59.025 "seek_data": false, 00:09:59.025 "copy": false, 00:09:59.025 
"nvme_iov_md": false 00:09:59.025 }, 00:09:59.025 "memory_domains": [ 00:09:59.025 { 00:09:59.025 "dma_device_id": "system", 00:09:59.025 "dma_device_type": 1 00:09:59.025 }, 00:09:59.025 { 00:09:59.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.025 "dma_device_type": 2 00:09:59.025 }, 00:09:59.025 { 00:09:59.025 "dma_device_id": "system", 00:09:59.025 "dma_device_type": 1 00:09:59.025 }, 00:09:59.025 { 00:09:59.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.025 "dma_device_type": 2 00:09:59.025 }, 00:09:59.025 { 00:09:59.025 "dma_device_id": "system", 00:09:59.025 "dma_device_type": 1 00:09:59.025 }, 00:09:59.025 { 00:09:59.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.026 "dma_device_type": 2 00:09:59.026 }, 00:09:59.026 { 00:09:59.026 "dma_device_id": "system", 00:09:59.026 "dma_device_type": 1 00:09:59.026 }, 00:09:59.026 { 00:09:59.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.026 "dma_device_type": 2 00:09:59.026 } 00:09:59.026 ], 00:09:59.026 "driver_specific": { 00:09:59.026 "raid": { 00:09:59.026 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:59.026 "strip_size_kb": 64, 00:09:59.026 "state": "online", 00:09:59.026 "raid_level": "concat", 00:09:59.026 "superblock": true, 00:09:59.026 "num_base_bdevs": 4, 00:09:59.026 "num_base_bdevs_discovered": 4, 00:09:59.026 "num_base_bdevs_operational": 4, 00:09:59.026 "base_bdevs_list": [ 00:09:59.026 { 00:09:59.026 "name": "BaseBdev1", 00:09:59.026 "uuid": "b6b0236c-bbe4-43d7-b17d-5014e3482cb7", 00:09:59.026 "is_configured": true, 00:09:59.026 "data_offset": 2048, 00:09:59.026 "data_size": 63488 00:09:59.026 }, 00:09:59.026 { 00:09:59.026 "name": "BaseBdev2", 00:09:59.026 "uuid": "549f7434-e4b7-4c41-a647-d55d6b9a6c3b", 00:09:59.026 "is_configured": true, 00:09:59.026 "data_offset": 2048, 00:09:59.026 "data_size": 63488 00:09:59.026 }, 00:09:59.026 { 00:09:59.026 "name": "BaseBdev3", 00:09:59.026 "uuid": "8f323780-6d7c-45dd-acbd-cb0ea3153aef", 00:09:59.026 "is_configured": true, 
00:09:59.026 "data_offset": 2048, 00:09:59.026 "data_size": 63488 00:09:59.026 }, 00:09:59.026 { 00:09:59.026 "name": "BaseBdev4", 00:09:59.026 "uuid": "cc22fa0d-6f6e-43ab-935d-cf8d9d4672e6", 00:09:59.026 "is_configured": true, 00:09:59.026 "data_offset": 2048, 00:09:59.026 "data_size": 63488 00:09:59.026 } 00:09:59.026 ] 00:09:59.026 } 00:09:59.026 } 00:09:59.026 }' 00:09:59.026 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.026 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.026 BaseBdev2 00:09:59.026 BaseBdev3 00:09:59.026 BaseBdev4' 00:09:59.026 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.286 16:22:50 
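The loop traced above (bdev_raid.sh@189-193) serializes each bdev's `[block_size, md_size, md_interleave, dif_type]` with jq's `join(" ")`; null fields become empty strings, which is why the xtrace shows `cmp_raid_bdev='512 '` and a pattern with escaped spaces (`\5\1\2\ \ \ `). A minimal sketch of that comparison idiom, with a hypothetical `same_format` helper standing in for the inline `[[ ... ]]` test:

```shell
#!/usr/bin/env bash
# Sketch of the format comparison seen in the xtrace above. The value below
# assumes a 512-byte-block malloc bdev with no metadata: jq's join(" ") turns
# [512, null, null, null] into "512" plus three trailing separators.
cmp_raid_bdev='512   '

# same_format is a hypothetical name for the inline check at bdev_raid.sh@193.
# In bash, an unquoted right-hand side of [[ == ]] is a glob pattern, which is
# why the trace shows every space escaped; quoting it compares literally.
same_format() {
    local cmp_base_bdev=$1
    [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]
}

same_format '512   ' && echo "base bdev matches raid bdev format"
same_format '4096 8 true 1' || echo "format mismatch detected"
```

Each base bdev in the array is run through the same serialization and must match the raid bdev's string exactly, so a single base bdev with a different block size or DIF setting fails the test.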
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.286 16:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.286 [2024-11-28 16:22:51.028260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.286 [2024-11-28 16:22:51.028292] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.286 [2024-11-28 16:22:51.028353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.286 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.546 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
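The `has_redundancy concat` call traced here returns 1, which drives `expected_state=offline`: concat carries no redundancy, so deleting BaseBdev1 must take the whole array offline rather than leaving it degraded. A sketch of that decision, assuming (not confirmed by this log, which only exercises the non-redundant branch) that mirrored/parity levels are the ones `has_redundancy` accepts:

```shell
#!/usr/bin/env bash
# Sketch of the has_redundancy / expected_state logic at bdev_raid.sh@198-262.
# The raid levels listed in the redundant branch are an assumption; the log
# above only shows concat falling through to "return 1".
has_redundancy() {
    case $1 in
        raid1|raid5f) return 0 ;;   # assumed redundant levels
        *) return 1 ;;              # concat, raid0: no redundancy
    esac
}

# After removing one base bdev, a non-redundant array goes offline while a
# redundant one is expected to stay online (degraded).
expected_state_after_removal() {
    if has_redundancy "$1"; then echo online; else echo offline; fi
}

expected_state_after_removal concat   # -> offline
```

This is exactly why the subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call expects state `offline` with only 3 of 4 base bdevs discovered.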
00:09:59.546 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.546 "name": "Existed_Raid", 00:09:59.546 "uuid": "07722d4f-d3bf-4cdf-b1d0-1bde8ed860de", 00:09:59.546 "strip_size_kb": 64, 00:09:59.546 "state": "offline", 00:09:59.546 "raid_level": "concat", 00:09:59.546 "superblock": true, 00:09:59.546 "num_base_bdevs": 4, 00:09:59.546 "num_base_bdevs_discovered": 3, 00:09:59.546 "num_base_bdevs_operational": 3, 00:09:59.546 "base_bdevs_list": [ 00:09:59.546 { 00:09:59.546 "name": null, 00:09:59.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.546 "is_configured": false, 00:09:59.546 "data_offset": 0, 00:09:59.546 "data_size": 63488 00:09:59.546 }, 00:09:59.546 { 00:09:59.546 "name": "BaseBdev2", 00:09:59.546 "uuid": "549f7434-e4b7-4c41-a647-d55d6b9a6c3b", 00:09:59.546 "is_configured": true, 00:09:59.546 "data_offset": 2048, 00:09:59.546 "data_size": 63488 00:09:59.546 }, 00:09:59.547 { 00:09:59.547 "name": "BaseBdev3", 00:09:59.547 "uuid": "8f323780-6d7c-45dd-acbd-cb0ea3153aef", 00:09:59.547 "is_configured": true, 00:09:59.547 "data_offset": 2048, 00:09:59.547 "data_size": 63488 00:09:59.547 }, 00:09:59.547 { 00:09:59.547 "name": "BaseBdev4", 00:09:59.547 "uuid": "cc22fa0d-6f6e-43ab-935d-cf8d9d4672e6", 00:09:59.547 "is_configured": true, 00:09:59.547 "data_offset": 2048, 00:09:59.547 "data_size": 63488 00:09:59.547 } 00:09:59.547 ] 00:09:59.547 }' 00:09:59.547 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.547 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.807 16:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.807 [2024-11-28 16:22:51.502662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.807 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.807 [2024-11-28 16:22:51.573619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:00.067 16:22:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.067 [2024-11-28 16:22:51.640661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:00.067 [2024-11-28 16:22:51.640707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.067 BaseBdev2 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.067 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.067 [ 00:10:00.067 { 00:10:00.067 "name": "BaseBdev2", 00:10:00.067 "aliases": [ 00:10:00.067 
"0157c93b-9f06-415f-acbf-b27a35a66b3e" 00:10:00.067 ], 00:10:00.067 "product_name": "Malloc disk", 00:10:00.067 "block_size": 512, 00:10:00.067 "num_blocks": 65536, 00:10:00.067 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:00.067 "assigned_rate_limits": { 00:10:00.067 "rw_ios_per_sec": 0, 00:10:00.067 "rw_mbytes_per_sec": 0, 00:10:00.067 "r_mbytes_per_sec": 0, 00:10:00.067 "w_mbytes_per_sec": 0 00:10:00.067 }, 00:10:00.067 "claimed": false, 00:10:00.067 "zoned": false, 00:10:00.067 "supported_io_types": { 00:10:00.067 "read": true, 00:10:00.067 "write": true, 00:10:00.067 "unmap": true, 00:10:00.067 "flush": true, 00:10:00.067 "reset": true, 00:10:00.067 "nvme_admin": false, 00:10:00.067 "nvme_io": false, 00:10:00.067 "nvme_io_md": false, 00:10:00.067 "write_zeroes": true, 00:10:00.067 "zcopy": true, 00:10:00.067 "get_zone_info": false, 00:10:00.067 "zone_management": false, 00:10:00.067 "zone_append": false, 00:10:00.068 "compare": false, 00:10:00.068 "compare_and_write": false, 00:10:00.068 "abort": true, 00:10:00.068 "seek_hole": false, 00:10:00.068 "seek_data": false, 00:10:00.068 "copy": true, 00:10:00.068 "nvme_iov_md": false 00:10:00.068 }, 00:10:00.068 "memory_domains": [ 00:10:00.068 { 00:10:00.068 "dma_device_id": "system", 00:10:00.068 "dma_device_type": 1 00:10:00.068 }, 00:10:00.068 { 00:10:00.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.068 "dma_device_type": 2 00:10:00.068 } 00:10:00.068 ], 00:10:00.068 "driver_specific": {} 00:10:00.068 } 00:10:00.068 ] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.068 16:22:51 
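The `waitforbdev BaseBdev2` trace above (autotest_common.sh@899-907) shows the helper defaulting its timeout to 2000 ms, waiting for bdev examine to finish, then fetching the bdev with that timeout. A sketch of that flow, with `rpc_cmd` stubbed out since the real helper talks to the SPDK application's RPC socket:

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev helper as reconstructed from the xtrace above.
# rpc_cmd is a stub for illustration; in the test suite it wraps scripts/rpc.py
# against the running SPDK target.
rpc_cmd() { echo "rpc: $*"; }

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    # Default seen in the trace: empty timeout becomes 2000 (ms).
    [[ -z $bdev_timeout ]] && bdev_timeout=2000
    # Let any in-flight examine callbacks complete before querying.
    rpc_cmd bdev_wait_for_examine
    # bdev_get_bdevs -t waits up to the timeout for the bdev to appear.
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev BaseBdev2
```

The same pattern repeats below for BaseBdev3 and BaseBdev4 as the test recreates the malloc base bdevs before reassembling the raid.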
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.068 BaseBdev3 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.068 [ 00:10:00.068 { 
00:10:00.068 "name": "BaseBdev3", 00:10:00.068 "aliases": [ 00:10:00.068 "a9481d0e-1102-472d-964b-c69f242451af" 00:10:00.068 ], 00:10:00.068 "product_name": "Malloc disk", 00:10:00.068 "block_size": 512, 00:10:00.068 "num_blocks": 65536, 00:10:00.068 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:00.068 "assigned_rate_limits": { 00:10:00.068 "rw_ios_per_sec": 0, 00:10:00.068 "rw_mbytes_per_sec": 0, 00:10:00.068 "r_mbytes_per_sec": 0, 00:10:00.068 "w_mbytes_per_sec": 0 00:10:00.068 }, 00:10:00.068 "claimed": false, 00:10:00.068 "zoned": false, 00:10:00.068 "supported_io_types": { 00:10:00.068 "read": true, 00:10:00.068 "write": true, 00:10:00.068 "unmap": true, 00:10:00.068 "flush": true, 00:10:00.068 "reset": true, 00:10:00.068 "nvme_admin": false, 00:10:00.068 "nvme_io": false, 00:10:00.068 "nvme_io_md": false, 00:10:00.068 "write_zeroes": true, 00:10:00.068 "zcopy": true, 00:10:00.068 "get_zone_info": false, 00:10:00.068 "zone_management": false, 00:10:00.068 "zone_append": false, 00:10:00.068 "compare": false, 00:10:00.068 "compare_and_write": false, 00:10:00.068 "abort": true, 00:10:00.068 "seek_hole": false, 00:10:00.068 "seek_data": false, 00:10:00.068 "copy": true, 00:10:00.068 "nvme_iov_md": false 00:10:00.068 }, 00:10:00.068 "memory_domains": [ 00:10:00.068 { 00:10:00.068 "dma_device_id": "system", 00:10:00.068 "dma_device_type": 1 00:10:00.068 }, 00:10:00.068 { 00:10:00.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.068 "dma_device_type": 2 00:10:00.068 } 00:10:00.068 ], 00:10:00.068 "driver_specific": {} 00:10:00.068 } 00:10:00.068 ] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.068 BaseBdev4 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.068 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:00.068 [ 00:10:00.068 { 00:10:00.068 "name": "BaseBdev4", 00:10:00.068 "aliases": [ 00:10:00.328 "9df29ed7-b3d1-4028-a4de-957e20cf13f3" 00:10:00.328 ], 00:10:00.328 "product_name": "Malloc disk", 00:10:00.328 "block_size": 512, 00:10:00.328 "num_blocks": 65536, 00:10:00.328 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:00.328 "assigned_rate_limits": { 00:10:00.328 "rw_ios_per_sec": 0, 00:10:00.328 "rw_mbytes_per_sec": 0, 00:10:00.328 "r_mbytes_per_sec": 0, 00:10:00.328 "w_mbytes_per_sec": 0 00:10:00.328 }, 00:10:00.328 "claimed": false, 00:10:00.328 "zoned": false, 00:10:00.328 "supported_io_types": { 00:10:00.328 "read": true, 00:10:00.328 "write": true, 00:10:00.328 "unmap": true, 00:10:00.328 "flush": true, 00:10:00.328 "reset": true, 00:10:00.328 "nvme_admin": false, 00:10:00.328 "nvme_io": false, 00:10:00.328 "nvme_io_md": false, 00:10:00.328 "write_zeroes": true, 00:10:00.328 "zcopy": true, 00:10:00.328 "get_zone_info": false, 00:10:00.328 "zone_management": false, 00:10:00.328 "zone_append": false, 00:10:00.328 "compare": false, 00:10:00.328 "compare_and_write": false, 00:10:00.328 "abort": true, 00:10:00.328 "seek_hole": false, 00:10:00.328 "seek_data": false, 00:10:00.328 "copy": true, 00:10:00.328 "nvme_iov_md": false 00:10:00.328 }, 00:10:00.328 "memory_domains": [ 00:10:00.328 { 00:10:00.328 "dma_device_id": "system", 00:10:00.328 "dma_device_type": 1 00:10:00.328 }, 00:10:00.328 { 00:10:00.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.328 "dma_device_type": 2 00:10:00.328 } 00:10:00.328 ], 00:10:00.328 "driver_specific": {} 00:10:00.328 } 00:10:00.328 ] 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.328 16:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.328 [2024-11-28 16:22:51.855093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.328 [2024-11-28 16:22:51.855135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.328 [2024-11-28 16:22:51.855157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.328 [2024-11-28 16:22:51.856920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.328 [2024-11-28 16:22:51.856968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.328 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.329 "name": "Existed_Raid", 00:10:00.329 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:00.329 "strip_size_kb": 64, 00:10:00.329 "state": "configuring", 00:10:00.329 "raid_level": "concat", 00:10:00.329 "superblock": true, 00:10:00.329 "num_base_bdevs": 4, 00:10:00.329 "num_base_bdevs_discovered": 3, 00:10:00.329 "num_base_bdevs_operational": 4, 00:10:00.329 "base_bdevs_list": [ 00:10:00.329 { 00:10:00.329 "name": "BaseBdev1", 00:10:00.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.329 "is_configured": false, 00:10:00.329 "data_offset": 0, 00:10:00.329 "data_size": 0 00:10:00.329 }, 00:10:00.329 { 00:10:00.329 "name": "BaseBdev2", 00:10:00.329 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:00.329 "is_configured": true, 00:10:00.329 "data_offset": 2048, 00:10:00.329 "data_size": 63488 
00:10:00.329 }, 00:10:00.329 { 00:10:00.329 "name": "BaseBdev3", 00:10:00.329 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:00.329 "is_configured": true, 00:10:00.329 "data_offset": 2048, 00:10:00.329 "data_size": 63488 00:10:00.329 }, 00:10:00.329 { 00:10:00.329 "name": "BaseBdev4", 00:10:00.329 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:00.329 "is_configured": true, 00:10:00.329 "data_offset": 2048, 00:10:00.329 "data_size": 63488 00:10:00.329 } 00:10:00.329 ] 00:10:00.329 }' 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.329 16:22:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.589 [2024-11-28 16:22:52.242417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.589 "name": "Existed_Raid", 00:10:00.589 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:00.589 "strip_size_kb": 64, 00:10:00.589 "state": "configuring", 00:10:00.589 "raid_level": "concat", 00:10:00.589 "superblock": true, 00:10:00.589 "num_base_bdevs": 4, 00:10:00.589 "num_base_bdevs_discovered": 2, 00:10:00.589 "num_base_bdevs_operational": 4, 00:10:00.589 "base_bdevs_list": [ 00:10:00.589 { 00:10:00.589 "name": "BaseBdev1", 00:10:00.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.589 "is_configured": false, 00:10:00.589 "data_offset": 0, 00:10:00.589 "data_size": 0 00:10:00.589 }, 00:10:00.589 { 00:10:00.589 "name": null, 00:10:00.589 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:00.589 "is_configured": false, 00:10:00.589 "data_offset": 0, 00:10:00.589 "data_size": 63488 
00:10:00.589 }, 00:10:00.589 { 00:10:00.589 "name": "BaseBdev3", 00:10:00.589 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:00.589 "is_configured": true, 00:10:00.589 "data_offset": 2048, 00:10:00.589 "data_size": 63488 00:10:00.589 }, 00:10:00.589 { 00:10:00.589 "name": "BaseBdev4", 00:10:00.589 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:00.589 "is_configured": true, 00:10:00.589 "data_offset": 2048, 00:10:00.589 "data_size": 63488 00:10:00.589 } 00:10:00.589 ] 00:10:00.589 }' 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.589 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.159 [2024-11-28 16:22:52.764301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.159 BaseBdev1 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.159 [ 00:10:01.159 { 00:10:01.159 "name": "BaseBdev1", 00:10:01.159 "aliases": [ 00:10:01.159 "8af3980d-fbd5-4ea3-a9d7-5889053402d4" 00:10:01.159 ], 00:10:01.159 "product_name": "Malloc disk", 00:10:01.159 "block_size": 512, 00:10:01.159 "num_blocks": 65536, 00:10:01.159 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:01.159 "assigned_rate_limits": { 00:10:01.159 "rw_ios_per_sec": 0, 00:10:01.159 "rw_mbytes_per_sec": 0, 
00:10:01.159 "r_mbytes_per_sec": 0, 00:10:01.159 "w_mbytes_per_sec": 0 00:10:01.159 }, 00:10:01.159 "claimed": true, 00:10:01.159 "claim_type": "exclusive_write", 00:10:01.159 "zoned": false, 00:10:01.159 "supported_io_types": { 00:10:01.159 "read": true, 00:10:01.159 "write": true, 00:10:01.159 "unmap": true, 00:10:01.159 "flush": true, 00:10:01.159 "reset": true, 00:10:01.159 "nvme_admin": false, 00:10:01.159 "nvme_io": false, 00:10:01.159 "nvme_io_md": false, 00:10:01.159 "write_zeroes": true, 00:10:01.159 "zcopy": true, 00:10:01.159 "get_zone_info": false, 00:10:01.159 "zone_management": false, 00:10:01.159 "zone_append": false, 00:10:01.159 "compare": false, 00:10:01.159 "compare_and_write": false, 00:10:01.159 "abort": true, 00:10:01.159 "seek_hole": false, 00:10:01.159 "seek_data": false, 00:10:01.159 "copy": true, 00:10:01.159 "nvme_iov_md": false 00:10:01.159 }, 00:10:01.159 "memory_domains": [ 00:10:01.159 { 00:10:01.159 "dma_device_id": "system", 00:10:01.159 "dma_device_type": 1 00:10:01.159 }, 00:10:01.159 { 00:10:01.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.159 "dma_device_type": 2 00:10:01.159 } 00:10:01.159 ], 00:10:01.159 "driver_specific": {} 00:10:01.159 } 00:10:01.159 ] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.159 16:22:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.159 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.160 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.160 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.160 "name": "Existed_Raid", 00:10:01.160 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:01.160 "strip_size_kb": 64, 00:10:01.160 "state": "configuring", 00:10:01.160 "raid_level": "concat", 00:10:01.160 "superblock": true, 00:10:01.160 "num_base_bdevs": 4, 00:10:01.160 "num_base_bdevs_discovered": 3, 00:10:01.160 "num_base_bdevs_operational": 4, 00:10:01.160 "base_bdevs_list": [ 00:10:01.160 { 00:10:01.160 "name": "BaseBdev1", 00:10:01.160 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 
00:10:01.160 "name": null, 00:10:01.160 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:01.160 "is_configured": false, 00:10:01.160 "data_offset": 0, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": "BaseBdev3", 00:10:01.160 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": "BaseBdev4", 00:10:01.160 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 } 00:10:01.160 ] 00:10:01.160 }' 00:10:01.160 16:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.160 16:22:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.730 [2024-11-28 16:22:53.259479] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.730 16:22:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.730 "name": "Existed_Raid", 00:10:01.730 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:01.730 "strip_size_kb": 64, 00:10:01.730 "state": "configuring", 00:10:01.730 "raid_level": "concat", 00:10:01.730 "superblock": true, 00:10:01.730 "num_base_bdevs": 4, 00:10:01.730 "num_base_bdevs_discovered": 2, 00:10:01.730 "num_base_bdevs_operational": 4, 00:10:01.730 "base_bdevs_list": [ 00:10:01.730 { 00:10:01.730 "name": "BaseBdev1", 00:10:01.730 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:01.730 "is_configured": true, 00:10:01.730 "data_offset": 2048, 00:10:01.730 "data_size": 63488 00:10:01.730 }, 00:10:01.730 { 00:10:01.730 "name": null, 00:10:01.730 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:01.730 "is_configured": false, 00:10:01.730 "data_offset": 0, 00:10:01.730 "data_size": 63488 00:10:01.730 }, 00:10:01.730 { 00:10:01.730 "name": null, 00:10:01.730 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:01.730 "is_configured": false, 00:10:01.730 "data_offset": 0, 00:10:01.730 "data_size": 63488 00:10:01.730 }, 00:10:01.730 { 00:10:01.730 "name": "BaseBdev4", 00:10:01.730 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:01.730 "is_configured": true, 00:10:01.730 "data_offset": 2048, 00:10:01.730 "data_size": 63488 00:10:01.730 } 00:10:01.730 ] 00:10:01.730 }' 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.730 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.990 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.990 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.990 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.990 
16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.990 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.250 [2024-11-28 16:22:53.766666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.250 "name": "Existed_Raid", 00:10:02.250 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:02.250 "strip_size_kb": 64, 00:10:02.250 "state": "configuring", 00:10:02.250 "raid_level": "concat", 00:10:02.250 "superblock": true, 00:10:02.250 "num_base_bdevs": 4, 00:10:02.250 "num_base_bdevs_discovered": 3, 00:10:02.250 "num_base_bdevs_operational": 4, 00:10:02.250 "base_bdevs_list": [ 00:10:02.250 { 00:10:02.250 "name": "BaseBdev1", 00:10:02.250 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:02.250 "is_configured": true, 00:10:02.250 "data_offset": 2048, 00:10:02.250 "data_size": 63488 00:10:02.250 }, 00:10:02.250 { 00:10:02.250 "name": null, 00:10:02.250 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:02.250 "is_configured": false, 00:10:02.250 "data_offset": 0, 00:10:02.250 "data_size": 63488 00:10:02.250 }, 00:10:02.250 { 00:10:02.250 "name": "BaseBdev3", 00:10:02.250 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:02.250 "is_configured": true, 00:10:02.250 "data_offset": 2048, 00:10:02.250 "data_size": 63488 00:10:02.250 }, 00:10:02.250 { 00:10:02.250 "name": "BaseBdev4", 00:10:02.250 "uuid": 
"9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:02.250 "is_configured": true, 00:10:02.250 "data_offset": 2048, 00:10:02.250 "data_size": 63488 00:10:02.250 } 00:10:02.250 ] 00:10:02.250 }' 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.250 16:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.510 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.770 [2024-11-28 16:22:54.281777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.770 "name": "Existed_Raid", 00:10:02.770 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:02.770 "strip_size_kb": 64, 00:10:02.770 "state": "configuring", 00:10:02.770 "raid_level": "concat", 00:10:02.770 "superblock": true, 00:10:02.770 "num_base_bdevs": 4, 00:10:02.770 "num_base_bdevs_discovered": 2, 00:10:02.770 "num_base_bdevs_operational": 4, 00:10:02.770 "base_bdevs_list": [ 00:10:02.770 { 00:10:02.770 "name": null, 00:10:02.770 
"uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:02.770 "is_configured": false, 00:10:02.770 "data_offset": 0, 00:10:02.770 "data_size": 63488 00:10:02.770 }, 00:10:02.770 { 00:10:02.770 "name": null, 00:10:02.770 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:02.770 "is_configured": false, 00:10:02.770 "data_offset": 0, 00:10:02.770 "data_size": 63488 00:10:02.770 }, 00:10:02.770 { 00:10:02.770 "name": "BaseBdev3", 00:10:02.770 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:02.770 "is_configured": true, 00:10:02.770 "data_offset": 2048, 00:10:02.770 "data_size": 63488 00:10:02.770 }, 00:10:02.770 { 00:10:02.770 "name": "BaseBdev4", 00:10:02.770 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:02.770 "is_configured": true, 00:10:02.770 "data_offset": 2048, 00:10:02.770 "data_size": 63488 00:10:02.770 } 00:10:02.770 ] 00:10:02.770 }' 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.770 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.030 [2024-11-28 16:22:54.735341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.030 "name": "Existed_Raid", 00:10:03.030 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:03.030 "strip_size_kb": 64, 00:10:03.030 "state": "configuring", 00:10:03.030 "raid_level": "concat", 00:10:03.030 "superblock": true, 00:10:03.030 "num_base_bdevs": 4, 00:10:03.030 "num_base_bdevs_discovered": 3, 00:10:03.030 "num_base_bdevs_operational": 4, 00:10:03.030 "base_bdevs_list": [ 00:10:03.030 { 00:10:03.030 "name": null, 00:10:03.030 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:03.030 "is_configured": false, 00:10:03.030 "data_offset": 0, 00:10:03.030 "data_size": 63488 00:10:03.030 }, 00:10:03.030 { 00:10:03.030 "name": "BaseBdev2", 00:10:03.030 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:03.030 "is_configured": true, 00:10:03.030 "data_offset": 2048, 00:10:03.030 "data_size": 63488 00:10:03.030 }, 00:10:03.030 { 00:10:03.030 "name": "BaseBdev3", 00:10:03.030 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:03.030 "is_configured": true, 00:10:03.030 "data_offset": 2048, 00:10:03.030 "data_size": 63488 00:10:03.030 }, 00:10:03.030 { 00:10:03.030 "name": "BaseBdev4", 00:10:03.030 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:03.030 "is_configured": true, 00:10:03.030 "data_offset": 2048, 00:10:03.030 "data_size": 63488 00:10:03.030 } 00:10:03.030 ] 00:10:03.030 }' 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.030 16:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.599 16:22:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8af3980d-fbd5-4ea3-a9d7-5889053402d4 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 [2024-11-28 16:22:55.249155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.599 [2024-11-28 16:22:55.249325] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:03.599 [2024-11-28 16:22:55.249338] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:03.599 [2024-11-28 16:22:55.249590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:03.599 NewBaseBdev 00:10:03.599 [2024-11-28 16:22:55.249708] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:03.599 [2024-11-28 16:22:55.249720] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:03.599 [2024-11-28 16:22:55.249805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.599 16:22:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 [ 00:10:03.599 { 00:10:03.599 "name": "NewBaseBdev", 00:10:03.599 "aliases": [ 00:10:03.599 "8af3980d-fbd5-4ea3-a9d7-5889053402d4" 00:10:03.599 ], 00:10:03.599 "product_name": "Malloc disk", 00:10:03.599 "block_size": 512, 00:10:03.599 "num_blocks": 65536, 00:10:03.599 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:03.599 "assigned_rate_limits": { 00:10:03.599 "rw_ios_per_sec": 0, 00:10:03.599 "rw_mbytes_per_sec": 0, 00:10:03.599 "r_mbytes_per_sec": 0, 00:10:03.599 "w_mbytes_per_sec": 0 00:10:03.599 }, 00:10:03.599 "claimed": true, 00:10:03.599 "claim_type": "exclusive_write", 00:10:03.599 "zoned": false, 00:10:03.599 "supported_io_types": { 00:10:03.599 "read": true, 00:10:03.599 "write": true, 00:10:03.599 "unmap": true, 00:10:03.599 "flush": true, 00:10:03.599 "reset": true, 00:10:03.599 "nvme_admin": false, 00:10:03.599 "nvme_io": false, 00:10:03.599 "nvme_io_md": false, 00:10:03.599 "write_zeroes": true, 00:10:03.599 "zcopy": true, 00:10:03.599 "get_zone_info": false, 00:10:03.599 "zone_management": false, 00:10:03.599 "zone_append": false, 00:10:03.599 "compare": false, 00:10:03.599 "compare_and_write": false, 00:10:03.599 "abort": true, 00:10:03.599 "seek_hole": false, 00:10:03.599 "seek_data": false, 00:10:03.599 "copy": true, 00:10:03.599 "nvme_iov_md": false 00:10:03.599 }, 00:10:03.599 "memory_domains": [ 00:10:03.599 { 00:10:03.599 "dma_device_id": "system", 00:10:03.599 "dma_device_type": 1 00:10:03.599 }, 00:10:03.599 { 00:10:03.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.599 "dma_device_type": 2 00:10:03.599 } 00:10:03.599 ], 00:10:03.599 "driver_specific": {} 00:10:03.599 } 00:10:03.599 ] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.599 16:22:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.599 "name": "Existed_Raid", 00:10:03.599 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:03.599 "strip_size_kb": 64, 00:10:03.599 
"state": "online", 00:10:03.599 "raid_level": "concat", 00:10:03.599 "superblock": true, 00:10:03.599 "num_base_bdevs": 4, 00:10:03.599 "num_base_bdevs_discovered": 4, 00:10:03.599 "num_base_bdevs_operational": 4, 00:10:03.599 "base_bdevs_list": [ 00:10:03.599 { 00:10:03.599 "name": "NewBaseBdev", 00:10:03.599 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:03.599 "is_configured": true, 00:10:03.599 "data_offset": 2048, 00:10:03.599 "data_size": 63488 00:10:03.599 }, 00:10:03.599 { 00:10:03.599 "name": "BaseBdev2", 00:10:03.599 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:03.599 "is_configured": true, 00:10:03.599 "data_offset": 2048, 00:10:03.599 "data_size": 63488 00:10:03.599 }, 00:10:03.599 { 00:10:03.599 "name": "BaseBdev3", 00:10:03.599 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:03.599 "is_configured": true, 00:10:03.599 "data_offset": 2048, 00:10:03.599 "data_size": 63488 00:10:03.599 }, 00:10:03.599 { 00:10:03.599 "name": "BaseBdev4", 00:10:03.599 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:03.599 "is_configured": true, 00:10:03.599 "data_offset": 2048, 00:10:03.599 "data_size": 63488 00:10:03.599 } 00:10:03.599 ] 00:10:03.599 }' 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.599 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.169 
16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.169 [2024-11-28 16:22:55.712716] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.169 "name": "Existed_Raid", 00:10:04.169 "aliases": [ 00:10:04.169 "f0831090-28c3-4209-8a4c-359431f6d6ee" 00:10:04.169 ], 00:10:04.169 "product_name": "Raid Volume", 00:10:04.169 "block_size": 512, 00:10:04.169 "num_blocks": 253952, 00:10:04.169 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:04.169 "assigned_rate_limits": { 00:10:04.169 "rw_ios_per_sec": 0, 00:10:04.169 "rw_mbytes_per_sec": 0, 00:10:04.169 "r_mbytes_per_sec": 0, 00:10:04.169 "w_mbytes_per_sec": 0 00:10:04.169 }, 00:10:04.169 "claimed": false, 00:10:04.169 "zoned": false, 00:10:04.169 "supported_io_types": { 00:10:04.169 "read": true, 00:10:04.169 "write": true, 00:10:04.169 "unmap": true, 00:10:04.169 "flush": true, 00:10:04.169 "reset": true, 00:10:04.169 "nvme_admin": false, 00:10:04.169 "nvme_io": false, 00:10:04.169 "nvme_io_md": false, 00:10:04.169 "write_zeroes": true, 00:10:04.169 "zcopy": false, 00:10:04.169 "get_zone_info": false, 00:10:04.169 "zone_management": false, 00:10:04.169 "zone_append": false, 00:10:04.169 "compare": false, 00:10:04.169 "compare_and_write": false, 00:10:04.169 "abort": 
false, 00:10:04.169 "seek_hole": false, 00:10:04.169 "seek_data": false, 00:10:04.169 "copy": false, 00:10:04.169 "nvme_iov_md": false 00:10:04.169 }, 00:10:04.169 "memory_domains": [ 00:10:04.169 { 00:10:04.169 "dma_device_id": "system", 00:10:04.169 "dma_device_type": 1 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.169 "dma_device_type": 2 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "system", 00:10:04.169 "dma_device_type": 1 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.169 "dma_device_type": 2 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "system", 00:10:04.169 "dma_device_type": 1 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.169 "dma_device_type": 2 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "system", 00:10:04.169 "dma_device_type": 1 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.169 "dma_device_type": 2 00:10:04.169 } 00:10:04.169 ], 00:10:04.169 "driver_specific": { 00:10:04.169 "raid": { 00:10:04.169 "uuid": "f0831090-28c3-4209-8a4c-359431f6d6ee", 00:10:04.169 "strip_size_kb": 64, 00:10:04.169 "state": "online", 00:10:04.169 "raid_level": "concat", 00:10:04.169 "superblock": true, 00:10:04.169 "num_base_bdevs": 4, 00:10:04.169 "num_base_bdevs_discovered": 4, 00:10:04.169 "num_base_bdevs_operational": 4, 00:10:04.169 "base_bdevs_list": [ 00:10:04.169 { 00:10:04.169 "name": "NewBaseBdev", 00:10:04.169 "uuid": "8af3980d-fbd5-4ea3-a9d7-5889053402d4", 00:10:04.169 "is_configured": true, 00:10:04.169 "data_offset": 2048, 00:10:04.169 "data_size": 63488 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "name": "BaseBdev2", 00:10:04.169 "uuid": "0157c93b-9f06-415f-acbf-b27a35a66b3e", 00:10:04.169 "is_configured": true, 00:10:04.169 "data_offset": 2048, 00:10:04.169 "data_size": 63488 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 
"name": "BaseBdev3", 00:10:04.169 "uuid": "a9481d0e-1102-472d-964b-c69f242451af", 00:10:04.169 "is_configured": true, 00:10:04.169 "data_offset": 2048, 00:10:04.169 "data_size": 63488 00:10:04.169 }, 00:10:04.169 { 00:10:04.169 "name": "BaseBdev4", 00:10:04.169 "uuid": "9df29ed7-b3d1-4028-a4de-957e20cf13f3", 00:10:04.169 "is_configured": true, 00:10:04.169 "data_offset": 2048, 00:10:04.169 "data_size": 63488 00:10:04.169 } 00:10:04.169 ] 00:10:04.169 } 00:10:04.169 } 00:10:04.169 }' 00:10:04.169 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.170 BaseBdev2 00:10:04.170 BaseBdev3 00:10:04.170 BaseBdev4' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.170 16:22:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.170 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.430 16:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.430 [2024-11-28 16:22:56.003871] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.430 [2024-11-28 16:22:56.003902] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.430 [2024-11-28 16:22:56.003970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.430 [2024-11-28 16:22:56.004038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.430 [2024-11-28 16:22:56.004049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82832 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82832 ']' 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82832 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82832 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.430 killing process with pid 82832 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82832' 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82832 00:10:04.430 [2024-11-28 16:22:56.051606] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.430 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82832 00:10:04.430 [2024-11-28 16:22:56.091385] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.690 16:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.690 00:10:04.690 real 0m9.341s 00:10:04.690 user 0m15.996s 00:10:04.690 sys 0m1.945s 00:10:04.690 16:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.690 16:22:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.690 ************************************ 00:10:04.690 END TEST raid_state_function_test_sb 00:10:04.690 ************************************ 00:10:04.690 16:22:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:04.690 16:22:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:04.690 16:22:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.691 16:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.691 ************************************ 00:10:04.691 START TEST raid_superblock_test 00:10:04.691 ************************************ 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83475 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83475 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83475 ']' 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.691 16:22:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.951 [2024-11-28 16:22:56.491580] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:04.951 [2024-11-28 16:22:56.491728] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83475 ] 00:10:04.951 [2024-11-28 16:22:56.650756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.951 [2024-11-28 16:22:56.696016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.211 [2024-11-28 16:22:56.737576] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.211 [2024-11-28 16:22:56.737611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:05.783 
16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.783 malloc1 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.783 [2024-11-28 16:22:57.331271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:05.783 [2024-11-28 16:22:57.331361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.783 [2024-11-28 16:22:57.331381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:05.783 [2024-11-28 16:22:57.331396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.783 [2024-11-28 16:22:57.333471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.783 [2024-11-28 16:22:57.333511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:05.783 pt1 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.783 malloc2 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.783 [2024-11-28 16:22:57.372356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:05.783 [2024-11-28 16:22:57.372456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.783 [2024-11-28 16:22:57.372490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:05.783 [2024-11-28 16:22:57.372513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.783 [2024-11-28 16:22:57.376989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.783 [2024-11-28 16:22:57.377060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:05.783 
pt2 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.783 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 malloc3 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 [2024-11-28 16:22:57.402661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:05.784 [2024-11-28 16:22:57.402712] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.784 [2024-11-28 16:22:57.402727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:05.784 [2024-11-28 16:22:57.402736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.784 [2024-11-28 16:22:57.404753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.784 [2024-11-28 16:22:57.404791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:05.784 pt3 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 malloc4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 [2024-11-28 16:22:57.430921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:05.784 [2024-11-28 16:22:57.430970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.784 [2024-11-28 16:22:57.431001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:05.784 [2024-11-28 16:22:57.431012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.784 [2024-11-28 16:22:57.432983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.784 [2024-11-28 16:22:57.433030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:05.784 pt4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 [2024-11-28 16:22:57.442990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:05.784 [2024-11-28 
16:22:57.444742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:05.784 [2024-11-28 16:22:57.444802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:05.784 [2024-11-28 16:22:57.444871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:05.784 [2024-11-28 16:22:57.445025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:05.784 [2024-11-28 16:22:57.445040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:05.784 [2024-11-28 16:22:57.445281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.784 [2024-11-28 16:22:57.445419] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:05.784 [2024-11-28 16:22:57.445434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:05.784 [2024-11-28 16:22:57.445543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.784 "name": "raid_bdev1", 00:10:05.784 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:05.784 "strip_size_kb": 64, 00:10:05.784 "state": "online", 00:10:05.784 "raid_level": "concat", 00:10:05.784 "superblock": true, 00:10:05.784 "num_base_bdevs": 4, 00:10:05.784 "num_base_bdevs_discovered": 4, 00:10:05.784 "num_base_bdevs_operational": 4, 00:10:05.784 "base_bdevs_list": [ 00:10:05.784 { 00:10:05.784 "name": "pt1", 00:10:05.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.784 "is_configured": true, 00:10:05.784 "data_offset": 2048, 00:10:05.784 "data_size": 63488 00:10:05.784 }, 00:10:05.784 { 00:10:05.784 "name": "pt2", 00:10:05.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.784 "is_configured": true, 00:10:05.784 "data_offset": 2048, 00:10:05.784 "data_size": 63488 00:10:05.784 }, 00:10:05.784 { 00:10:05.784 "name": "pt3", 00:10:05.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.784 "is_configured": true, 00:10:05.784 "data_offset": 2048, 00:10:05.784 
"data_size": 63488 00:10:05.784 }, 00:10:05.784 { 00:10:05.784 "name": "pt4", 00:10:05.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:05.784 "is_configured": true, 00:10:05.784 "data_offset": 2048, 00:10:05.784 "data_size": 63488 00:10:05.784 } 00:10:05.784 ] 00:10:05.784 }' 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.784 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.355 [2024-11-28 16:22:57.886526] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.355 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.355 "name": "raid_bdev1", 00:10:06.355 "aliases": [ 00:10:06.355 "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f" 
00:10:06.355 ], 00:10:06.355 "product_name": "Raid Volume", 00:10:06.355 "block_size": 512, 00:10:06.355 "num_blocks": 253952, 00:10:06.355 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:06.355 "assigned_rate_limits": { 00:10:06.355 "rw_ios_per_sec": 0, 00:10:06.355 "rw_mbytes_per_sec": 0, 00:10:06.355 "r_mbytes_per_sec": 0, 00:10:06.355 "w_mbytes_per_sec": 0 00:10:06.355 }, 00:10:06.355 "claimed": false, 00:10:06.355 "zoned": false, 00:10:06.355 "supported_io_types": { 00:10:06.355 "read": true, 00:10:06.355 "write": true, 00:10:06.355 "unmap": true, 00:10:06.355 "flush": true, 00:10:06.355 "reset": true, 00:10:06.355 "nvme_admin": false, 00:10:06.355 "nvme_io": false, 00:10:06.355 "nvme_io_md": false, 00:10:06.355 "write_zeroes": true, 00:10:06.355 "zcopy": false, 00:10:06.355 "get_zone_info": false, 00:10:06.355 "zone_management": false, 00:10:06.355 "zone_append": false, 00:10:06.355 "compare": false, 00:10:06.355 "compare_and_write": false, 00:10:06.355 "abort": false, 00:10:06.355 "seek_hole": false, 00:10:06.355 "seek_data": false, 00:10:06.355 "copy": false, 00:10:06.355 "nvme_iov_md": false 00:10:06.355 }, 00:10:06.355 "memory_domains": [ 00:10:06.355 { 00:10:06.355 "dma_device_id": "system", 00:10:06.355 "dma_device_type": 1 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.355 "dma_device_type": 2 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": "system", 00:10:06.355 "dma_device_type": 1 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.355 "dma_device_type": 2 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": "system", 00:10:06.355 "dma_device_type": 1 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.355 "dma_device_type": 2 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": "system", 00:10:06.355 "dma_device_type": 1 00:10:06.355 }, 00:10:06.355 { 00:10:06.355 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:06.355 "dma_device_type": 2 00:10:06.355 } 00:10:06.355 ], 00:10:06.355 "driver_specific": { 00:10:06.355 "raid": { 00:10:06.355 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:06.355 "strip_size_kb": 64, 00:10:06.355 "state": "online", 00:10:06.355 "raid_level": "concat", 00:10:06.355 "superblock": true, 00:10:06.355 "num_base_bdevs": 4, 00:10:06.355 "num_base_bdevs_discovered": 4, 00:10:06.355 "num_base_bdevs_operational": 4, 00:10:06.355 "base_bdevs_list": [ 00:10:06.355 { 00:10:06.355 "name": "pt1", 00:10:06.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.355 "is_configured": true, 00:10:06.356 "data_offset": 2048, 00:10:06.356 "data_size": 63488 00:10:06.356 }, 00:10:06.356 { 00:10:06.356 "name": "pt2", 00:10:06.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.356 "is_configured": true, 00:10:06.356 "data_offset": 2048, 00:10:06.356 "data_size": 63488 00:10:06.356 }, 00:10:06.356 { 00:10:06.356 "name": "pt3", 00:10:06.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.356 "is_configured": true, 00:10:06.356 "data_offset": 2048, 00:10:06.356 "data_size": 63488 00:10:06.356 }, 00:10:06.356 { 00:10:06.356 "name": "pt4", 00:10:06.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.356 "is_configured": true, 00:10:06.356 "data_offset": 2048, 00:10:06.356 "data_size": 63488 00:10:06.356 } 00:10:06.356 ] 00:10:06.356 } 00:10:06.356 } 00:10:06.356 }' 00:10:06.356 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.356 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:06.356 pt2 00:10:06.356 pt3 00:10:06.356 pt4' 00:10:06.356 16:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.356 16:22:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.356 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.617 [2024-11-28 16:22:58.229963] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f ']' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.617 [2024-11-28 16:22:58.261600] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.617 [2024-11-28 16:22:58.261676] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.617 [2024-11-28 16:22:58.261746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.617 [2024-11-28 16:22:58.261824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.617 [2024-11-28 16:22:58.261852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.617 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.618 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.878 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.878 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:06.878 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:06.878 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:06.878 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:06.878 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.879 16:22:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.879 [2024-11-28 16:22:58.425362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:06.879 [2024-11-28 16:22:58.427295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:06.879 [2024-11-28 16:22:58.427383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:06.879 [2024-11-28 16:22:58.427430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:06.879 [2024-11-28 16:22:58.427518] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:06.879 [2024-11-28 16:22:58.427622] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:06.879 [2024-11-28 16:22:58.427693] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:06.879 [2024-11-28 16:22:58.427747] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:06.879 [2024-11-28 16:22:58.427810] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.879 [2024-11-28 16:22:58.427841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:10:06.879 request: 00:10:06.879 { 00:10:06.879 "name": "raid_bdev1", 00:10:06.879 "raid_level": "concat", 00:10:06.879 "base_bdevs": [ 00:10:06.879 "malloc1", 00:10:06.879 "malloc2", 00:10:06.879 "malloc3", 00:10:06.879 "malloc4" 00:10:06.879 ], 00:10:06.879 "strip_size_kb": 64, 00:10:06.879 "superblock": false, 00:10:06.879 "method": "bdev_raid_create", 00:10:06.879 "req_id": 1 00:10:06.879 } 00:10:06.879 Got JSON-RPC error response 00:10:06.879 response: 00:10:06.879 { 00:10:06.879 "code": -17, 00:10:06.879 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:06.879 } 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.879 [2024-11-28 16:22:58.489210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:06.879 [2024-11-28 16:22:58.489295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.879 [2024-11-28 16:22:58.489346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:06.879 [2024-11-28 16:22:58.489375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.879 [2024-11-28 16:22:58.491438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.879 [2024-11-28 16:22:58.491503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:06.879 [2024-11-28 16:22:58.491609] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:06.879 [2024-11-28 16:22:58.491681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:06.879 pt1 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.879 "name": "raid_bdev1", 00:10:06.879 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:06.879 "strip_size_kb": 64, 00:10:06.879 "state": "configuring", 00:10:06.879 "raid_level": "concat", 00:10:06.879 "superblock": true, 00:10:06.879 "num_base_bdevs": 4, 00:10:06.879 "num_base_bdevs_discovered": 1, 00:10:06.879 "num_base_bdevs_operational": 4, 00:10:06.879 "base_bdevs_list": [ 00:10:06.879 { 00:10:06.879 "name": "pt1", 00:10:06.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.879 "is_configured": true, 00:10:06.879 "data_offset": 2048, 00:10:06.879 "data_size": 63488 00:10:06.879 }, 00:10:06.879 { 00:10:06.879 "name": null, 00:10:06.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.879 "is_configured": false, 00:10:06.879 "data_offset": 2048, 00:10:06.879 "data_size": 63488 00:10:06.879 }, 00:10:06.879 { 00:10:06.879 "name": null, 00:10:06.879 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.879 "is_configured": false, 00:10:06.879 "data_offset": 2048, 00:10:06.879 "data_size": 63488 00:10:06.879 }, 00:10:06.879 { 00:10:06.879 "name": null, 00:10:06.879 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:06.879 "is_configured": false, 00:10:06.879 "data_offset": 2048, 00:10:06.879 "data_size": 63488 00:10:06.879 } 00:10:06.879 ] 00:10:06.879 }' 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.879 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.455 [2024-11-28 16:22:58.916514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.455 [2024-11-28 16:22:58.916575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.455 [2024-11-28 16:22:58.916594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:07.455 [2024-11-28 16:22:58.916604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.455 [2024-11-28 16:22:58.917022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.455 [2024-11-28 16:22:58.917076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.455 [2024-11-28 16:22:58.917148] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.455 [2024-11-28 16:22:58.917168] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.455 pt2 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.455 [2024-11-28 16:22:58.928503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.455 16:22:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.455 "name": "raid_bdev1", 00:10:07.455 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:07.455 "strip_size_kb": 64, 00:10:07.455 "state": "configuring", 00:10:07.455 "raid_level": "concat", 00:10:07.455 "superblock": true, 00:10:07.455 "num_base_bdevs": 4, 00:10:07.455 "num_base_bdevs_discovered": 1, 00:10:07.455 "num_base_bdevs_operational": 4, 00:10:07.455 "base_bdevs_list": [ 00:10:07.455 { 00:10:07.455 "name": "pt1", 00:10:07.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.455 "is_configured": true, 00:10:07.455 "data_offset": 2048, 00:10:07.455 "data_size": 63488 00:10:07.455 }, 00:10:07.455 { 00:10:07.455 "name": null, 00:10:07.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.455 "is_configured": false, 00:10:07.455 "data_offset": 0, 00:10:07.455 "data_size": 63488 00:10:07.455 }, 00:10:07.455 { 00:10:07.455 "name": null, 00:10:07.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.455 "is_configured": false, 00:10:07.455 "data_offset": 2048, 00:10:07.455 "data_size": 63488 00:10:07.455 }, 00:10:07.455 { 00:10:07.455 "name": null, 00:10:07.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:07.455 "is_configured": false, 00:10:07.455 "data_offset": 2048, 00:10:07.455 "data_size": 63488 00:10:07.455 } 00:10:07.455 ] 00:10:07.455 }' 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.455 16:22:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.724 [2024-11-28 16:22:59.403763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.724 [2024-11-28 16:22:59.403891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.724 [2024-11-28 16:22:59.403913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:07.724 [2024-11-28 16:22:59.403924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.724 [2024-11-28 16:22:59.404302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.724 [2024-11-28 16:22:59.404328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.724 [2024-11-28 16:22:59.404396] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.724 [2024-11-28 16:22:59.404418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.724 pt2 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.724 [2024-11-28 16:22:59.415700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:07.724 [2024-11-28 16:22:59.415809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.724 [2024-11-28 16:22:59.415829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:07.724 [2024-11-28 16:22:59.415839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.724 [2024-11-28 16:22:59.416173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.724 [2024-11-28 16:22:59.416199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:07.724 [2024-11-28 16:22:59.416255] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:07.724 [2024-11-28 16:22:59.416274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:07.724 pt3 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.724 [2024-11-28 16:22:59.427696] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:07.724 [2024-11-28 16:22:59.427748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.724 [2024-11-28 16:22:59.427763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:07.724 [2024-11-28 16:22:59.427772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.724 [2024-11-28 16:22:59.428091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.724 [2024-11-28 16:22:59.428109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:07.724 [2024-11-28 16:22:59.428155] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:07.724 [2024-11-28 16:22:59.428174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:07.724 [2024-11-28 16:22:59.428263] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:07.724 [2024-11-28 16:22:59.428284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:07.724 [2024-11-28 16:22:59.428496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:07.724 [2024-11-28 16:22:59.428605] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:07.724 [2024-11-28 16:22:59.428619] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:07.724 [2024-11-28 16:22:59.428711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.724 pt4 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.724 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.724 "name": "raid_bdev1", 00:10:07.724 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:07.724 "strip_size_kb": 64, 00:10:07.724 "state": "online", 00:10:07.724 "raid_level": "concat", 00:10:07.724 
"superblock": true, 00:10:07.724 "num_base_bdevs": 4, 00:10:07.725 "num_base_bdevs_discovered": 4, 00:10:07.725 "num_base_bdevs_operational": 4, 00:10:07.725 "base_bdevs_list": [ 00:10:07.725 { 00:10:07.725 "name": "pt1", 00:10:07.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.725 "is_configured": true, 00:10:07.725 "data_offset": 2048, 00:10:07.725 "data_size": 63488 00:10:07.725 }, 00:10:07.725 { 00:10:07.725 "name": "pt2", 00:10:07.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.725 "is_configured": true, 00:10:07.725 "data_offset": 2048, 00:10:07.725 "data_size": 63488 00:10:07.725 }, 00:10:07.725 { 00:10:07.725 "name": "pt3", 00:10:07.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.725 "is_configured": true, 00:10:07.725 "data_offset": 2048, 00:10:07.725 "data_size": 63488 00:10:07.725 }, 00:10:07.725 { 00:10:07.725 "name": "pt4", 00:10:07.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:07.725 "is_configured": true, 00:10:07.725 "data_offset": 2048, 00:10:07.725 "data_size": 63488 00:10:07.725 } 00:10:07.725 ] 00:10:07.725 }' 00:10:07.725 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.725 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.295 16:22:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.295 [2024-11-28 16:22:59.879223] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.295 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.295 "name": "raid_bdev1", 00:10:08.295 "aliases": [ 00:10:08.295 "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f" 00:10:08.295 ], 00:10:08.295 "product_name": "Raid Volume", 00:10:08.295 "block_size": 512, 00:10:08.295 "num_blocks": 253952, 00:10:08.295 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:08.295 "assigned_rate_limits": { 00:10:08.295 "rw_ios_per_sec": 0, 00:10:08.295 "rw_mbytes_per_sec": 0, 00:10:08.295 "r_mbytes_per_sec": 0, 00:10:08.295 "w_mbytes_per_sec": 0 00:10:08.295 }, 00:10:08.295 "claimed": false, 00:10:08.295 "zoned": false, 00:10:08.295 "supported_io_types": { 00:10:08.295 "read": true, 00:10:08.295 "write": true, 00:10:08.295 "unmap": true, 00:10:08.295 "flush": true, 00:10:08.295 "reset": true, 00:10:08.295 "nvme_admin": false, 00:10:08.295 "nvme_io": false, 00:10:08.295 "nvme_io_md": false, 00:10:08.295 "write_zeroes": true, 00:10:08.295 "zcopy": false, 00:10:08.295 "get_zone_info": false, 00:10:08.295 "zone_management": false, 00:10:08.295 "zone_append": false, 00:10:08.295 "compare": false, 00:10:08.295 "compare_and_write": false, 00:10:08.295 "abort": false, 00:10:08.295 "seek_hole": false, 00:10:08.295 "seek_data": false, 00:10:08.295 "copy": false, 00:10:08.295 "nvme_iov_md": false 00:10:08.295 }, 00:10:08.295 
"memory_domains": [ 00:10:08.295 { 00:10:08.295 "dma_device_id": "system", 00:10:08.295 "dma_device_type": 1 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.295 "dma_device_type": 2 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "system", 00:10:08.295 "dma_device_type": 1 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.295 "dma_device_type": 2 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "system", 00:10:08.295 "dma_device_type": 1 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.295 "dma_device_type": 2 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "system", 00:10:08.295 "dma_device_type": 1 00:10:08.295 }, 00:10:08.295 { 00:10:08.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.295 "dma_device_type": 2 00:10:08.295 } 00:10:08.295 ], 00:10:08.295 "driver_specific": { 00:10:08.295 "raid": { 00:10:08.295 "uuid": "e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f", 00:10:08.296 "strip_size_kb": 64, 00:10:08.296 "state": "online", 00:10:08.296 "raid_level": "concat", 00:10:08.296 "superblock": true, 00:10:08.296 "num_base_bdevs": 4, 00:10:08.296 "num_base_bdevs_discovered": 4, 00:10:08.296 "num_base_bdevs_operational": 4, 00:10:08.296 "base_bdevs_list": [ 00:10:08.296 { 00:10:08.296 "name": "pt1", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.296 "is_configured": true, 00:10:08.296 "data_offset": 2048, 00:10:08.296 "data_size": 63488 00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "name": "pt2", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.296 "is_configured": true, 00:10:08.296 "data_offset": 2048, 00:10:08.296 "data_size": 63488 00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "name": "pt3", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.296 "is_configured": true, 00:10:08.296 "data_offset": 2048, 00:10:08.296 "data_size": 63488 
00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "name": "pt4", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:08.296 "is_configured": true, 00:10:08.296 "data_offset": 2048, 00:10:08.296 "data_size": 63488 00:10:08.296 } 00:10:08.296 ] 00:10:08.296 } 00:10:08.296 } 00:10:08.296 }' 00:10:08.296 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.296 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:08.296 pt2 00:10:08.296 pt3 00:10:08.296 pt4' 00:10:08.296 16:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.296 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.556 [2024-11-28 16:23:00.234561] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f '!=' e3b39486-8d63-4a9c-ae2c-8dd5f23daa0f ']' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83475 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83475 ']' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83475 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83475 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83475' 00:10:08.556 killing process with pid 83475 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83475 00:10:08.556 [2024-11-28 16:23:00.317668] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.556 [2024-11-28 16:23:00.317756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.556 [2024-11-28 16:23:00.317823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.556 [2024-11-28 16:23:00.317852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:08.556 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83475 00:10:08.816 [2024-11-28 16:23:00.360404] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.076 16:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:09.076 00:10:09.076 real 0m4.192s 00:10:09.076 user 0m6.655s 00:10:09.076 sys 0m0.867s 00:10:09.076 ************************************ 00:10:09.076 END TEST raid_superblock_test 00:10:09.076 ************************************ 00:10:09.076 16:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.076 16:23:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.076 16:23:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:09.076 16:23:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:09.076 16:23:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.076 16:23:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.076 ************************************ 00:10:09.076 START TEST raid_read_error_test 00:10:09.076 ************************************ 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:09.076 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5m5r1IAR55 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83723 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83723 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83723 ']' 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.077 16:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.077 [2024-11-28 16:23:00.781246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:09.077 [2024-11-28 16:23:00.781445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83723 ] 00:10:09.337 [2024-11-28 16:23:00.941719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.337 [2024-11-28 16:23:00.985663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.337 [2024-11-28 16:23:01.027118] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.337 [2024-11-28 16:23:01.027229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.906 BaseBdev1_malloc 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.906 true 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.906 [2024-11-28 16:23:01.644604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.906 [2024-11-28 16:23:01.644663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.906 [2024-11-28 16:23:01.644700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:09.906 [2024-11-28 16:23:01.644709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.906 [2024-11-28 16:23:01.646924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.906 [2024-11-28 16:23:01.646958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.906 BaseBdev1 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.906 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 BaseBdev2_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 true 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 [2024-11-28 16:23:01.702621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.166 [2024-11-28 16:23:01.702693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.166 [2024-11-28 16:23:01.702720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:10.166 [2024-11-28 16:23:01.702734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.166 [2024-11-28 16:23:01.705915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.166 [2024-11-28 16:23:01.705966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:10.166 BaseBdev2 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 BaseBdev3_malloc 00:10:10.166 16:23:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 true 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 [2024-11-28 16:23:01.743243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:10.166 [2024-11-28 16:23:01.743326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.166 [2024-11-28 16:23:01.743364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:10.166 [2024-11-28 16:23:01.743373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.166 [2024-11-28 16:23:01.745347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.166 [2024-11-28 16:23:01.745393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:10.166 BaseBdev3 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 BaseBdev4_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 true 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.166 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.166 [2024-11-28 16:23:01.783442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:10.166 [2024-11-28 16:23:01.783483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.167 [2024-11-28 16:23:01.783518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:10.167 [2024-11-28 16:23:01.783525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.167 [2024-11-28 16:23:01.785468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.167 [2024-11-28 16:23:01.785555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:10.167 BaseBdev4 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.167 [2024-11-28 16:23:01.795468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.167 [2024-11-28 16:23:01.797266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.167 [2024-11-28 16:23:01.797349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.167 [2024-11-28 16:23:01.797398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.167 [2024-11-28 16:23:01.797584] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:10.167 [2024-11-28 16:23:01.797596] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.167 [2024-11-28 16:23:01.797820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.167 [2024-11-28 16:23:01.797978] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:10.167 [2024-11-28 16:23:01.797991] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:10.167 [2024-11-28 16:23:01.798095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:10.167 16:23:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.167 "name": "raid_bdev1", 00:10:10.167 "uuid": "f698f5af-6b9c-4646-9628-21394ff862a8", 00:10:10.167 "strip_size_kb": 64, 00:10:10.167 "state": "online", 00:10:10.167 "raid_level": "concat", 00:10:10.167 "superblock": true, 00:10:10.167 "num_base_bdevs": 4, 00:10:10.167 "num_base_bdevs_discovered": 4, 00:10:10.167 "num_base_bdevs_operational": 4, 00:10:10.167 "base_bdevs_list": [ 
00:10:10.167 { 00:10:10.167 "name": "BaseBdev1", 00:10:10.167 "uuid": "84f984a9-21f2-57b9-b7ea-69d3069167b4", 00:10:10.167 "is_configured": true, 00:10:10.167 "data_offset": 2048, 00:10:10.167 "data_size": 63488 00:10:10.167 }, 00:10:10.167 { 00:10:10.167 "name": "BaseBdev2", 00:10:10.167 "uuid": "df59efc7-4e14-5ed8-8297-1343ed5158f4", 00:10:10.167 "is_configured": true, 00:10:10.167 "data_offset": 2048, 00:10:10.167 "data_size": 63488 00:10:10.167 }, 00:10:10.167 { 00:10:10.167 "name": "BaseBdev3", 00:10:10.167 "uuid": "365f5ef4-8037-549d-9475-a9a35e6f158f", 00:10:10.167 "is_configured": true, 00:10:10.167 "data_offset": 2048, 00:10:10.167 "data_size": 63488 00:10:10.167 }, 00:10:10.167 { 00:10:10.167 "name": "BaseBdev4", 00:10:10.167 "uuid": "b84fdf24-a994-550f-af46-b9a7f4bb82a9", 00:10:10.167 "is_configured": true, 00:10:10.167 "data_offset": 2048, 00:10:10.167 "data_size": 63488 00:10:10.167 } 00:10:10.167 ] 00:10:10.167 }' 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.167 16:23:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.428 16:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:10.428 16:23:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:10.688 [2024-11-28 16:23:02.287002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.629 16:23:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.629 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.630 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.630 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.630 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.630 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.630 16:23:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.630 "name": "raid_bdev1", 00:10:11.630 "uuid": "f698f5af-6b9c-4646-9628-21394ff862a8", 00:10:11.630 "strip_size_kb": 64, 00:10:11.630 "state": "online", 00:10:11.630 "raid_level": "concat", 00:10:11.630 "superblock": true, 00:10:11.630 "num_base_bdevs": 4, 00:10:11.630 "num_base_bdevs_discovered": 4, 00:10:11.630 "num_base_bdevs_operational": 4, 00:10:11.630 "base_bdevs_list": [ 00:10:11.630 { 00:10:11.630 "name": "BaseBdev1", 00:10:11.630 "uuid": "84f984a9-21f2-57b9-b7ea-69d3069167b4", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 2048, 00:10:11.630 "data_size": 63488 00:10:11.630 }, 00:10:11.630 { 00:10:11.630 "name": "BaseBdev2", 00:10:11.630 "uuid": "df59efc7-4e14-5ed8-8297-1343ed5158f4", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 2048, 00:10:11.630 "data_size": 63488 00:10:11.630 }, 00:10:11.630 { 00:10:11.630 "name": "BaseBdev3", 00:10:11.630 "uuid": "365f5ef4-8037-549d-9475-a9a35e6f158f", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 2048, 00:10:11.630 "data_size": 63488 00:10:11.630 }, 00:10:11.630 { 00:10:11.630 "name": "BaseBdev4", 00:10:11.630 "uuid": "b84fdf24-a994-550f-af46-b9a7f4bb82a9", 00:10:11.630 "is_configured": true, 00:10:11.630 "data_offset": 2048, 00:10:11.630 "data_size": 63488 00:10:11.630 } 00:10:11.630 ] 00:10:11.630 }' 00:10:11.630 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.630 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.890 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.890 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.890 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 [2024-11-28 16:23:03.662546] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.149 [2024-11-28 16:23:03.662643] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.149 [2024-11-28 16:23:03.665091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.149 [2024-11-28 16:23:03.665180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.149 [2024-11-28 16:23:03.665241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.149 [2024-11-28 16:23:03.665293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:12.149 { 00:10:12.149 "results": [ 00:10:12.149 { 00:10:12.149 "job": "raid_bdev1", 00:10:12.149 "core_mask": "0x1", 00:10:12.149 "workload": "randrw", 00:10:12.149 "percentage": 50, 00:10:12.149 "status": "finished", 00:10:12.149 "queue_depth": 1, 00:10:12.149 "io_size": 131072, 00:10:12.149 "runtime": 1.376532, 00:10:12.149 "iops": 16976.72120953236, 00:10:12.149 "mibps": 2122.090151191545, 00:10:12.149 "io_failed": 1, 00:10:12.149 "io_timeout": 0, 00:10:12.149 "avg_latency_us": 81.73918370321373, 00:10:12.149 "min_latency_us": 24.705676855895195, 00:10:12.149 "max_latency_us": 1352.216593886463 00:10:12.149 } 00:10:12.149 ], 00:10:12.149 "core_count": 1 00:10:12.149 } 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83723 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83723 ']' 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83723 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83723 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83723' 00:10:12.149 killing process with pid 83723 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83723 00:10:12.149 [2024-11-28 16:23:03.714500] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.149 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83723 00:10:12.149 [2024-11-28 16:23:03.749314] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5m5r1IAR55 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:12.409 ************************************ 00:10:12.409 END TEST raid_read_error_test 00:10:12.409 ************************************ 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:12.409 00:10:12.409 real 0m3.316s 
00:10:12.409 user 0m4.114s 00:10:12.409 sys 0m0.581s 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.409 16:23:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.409 16:23:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:12.409 16:23:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:12.409 16:23:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.409 16:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.409 ************************************ 00:10:12.409 START TEST raid_write_error_test 00:10:12.409 ************************************ 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:12.409 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dqiIxY32m9 00:10:12.410 16:23:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83852 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83852 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83852 ']' 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.410 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.410 [2024-11-28 16:23:04.168513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:12.410 [2024-11-28 16:23:04.168738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83852 ] 00:10:12.669 [2024-11-28 16:23:04.312438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.669 [2024-11-28 16:23:04.357298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.669 [2024-11-28 16:23:04.398684] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.669 [2024-11-28 16:23:04.398795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.239 16:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.239 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:13.239 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.239 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:13.239 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.239 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 BaseBdev1_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 true 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 [2024-11-28 16:23:05.040122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.499 [2024-11-28 16:23:05.040174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.499 [2024-11-28 16:23:05.040193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:13.499 [2024-11-28 16:23:05.040202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.499 [2024-11-28 16:23:05.042253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.499 [2024-11-28 16:23:05.042289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.499 BaseBdev1 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 BaseBdev2_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:13.499 16:23:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 true 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 [2024-11-28 16:23:05.091615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.499 [2024-11-28 16:23:05.091735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.499 [2024-11-28 16:23:05.091760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:13.499 [2024-11-28 16:23:05.091769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.499 [2024-11-28 16:23:05.093780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.499 [2024-11-28 16:23:05.093816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.499 BaseBdev2 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:13.499 BaseBdev3_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 true 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 [2024-11-28 16:23:05.132080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:13.499 [2024-11-28 16:23:05.132131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.499 [2024-11-28 16:23:05.132150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:13.499 [2024-11-28 16:23:05.132158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.499 [2024-11-28 16:23:05.134146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.499 [2024-11-28 16:23:05.134226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:13.499 BaseBdev3 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 BaseBdev4_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 true 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 [2024-11-28 16:23:05.172457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:13.499 [2024-11-28 16:23:05.172506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.499 [2024-11-28 16:23:05.172527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:13.499 [2024-11-28 16:23:05.172536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.499 [2024-11-28 16:23:05.174490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.499 [2024-11-28 16:23:05.174526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:13.499 BaseBdev4 
00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 [2024-11-28 16:23:05.184489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.499 [2024-11-28 16:23:05.186293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.499 [2024-11-28 16:23:05.186412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.499 [2024-11-28 16:23:05.186502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.499 [2024-11-28 16:23:05.186727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:13.499 [2024-11-28 16:23:05.186777] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.499 [2024-11-28 16:23:05.187044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:13.499 [2024-11-28 16:23:05.187212] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:13.499 [2024-11-28 16:23:05.187254] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:13.499 [2024-11-28 16:23:05.187406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.499 "name": "raid_bdev1", 00:10:13.499 "uuid": "8ccf8a87-8118-479b-a8c4-24485b8e07a4", 00:10:13.499 "strip_size_kb": 64, 00:10:13.499 "state": "online", 00:10:13.499 "raid_level": "concat", 00:10:13.499 "superblock": true, 00:10:13.499 "num_base_bdevs": 4, 00:10:13.499 "num_base_bdevs_discovered": 4, 00:10:13.499 
"num_base_bdevs_operational": 4, 00:10:13.499 "base_bdevs_list": [ 00:10:13.499 { 00:10:13.499 "name": "BaseBdev1", 00:10:13.499 "uuid": "daa345d5-4625-557d-89a7-4b7132287085", 00:10:13.499 "is_configured": true, 00:10:13.499 "data_offset": 2048, 00:10:13.499 "data_size": 63488 00:10:13.499 }, 00:10:13.499 { 00:10:13.499 "name": "BaseBdev2", 00:10:13.499 "uuid": "0df5d90a-3c0b-57f3-9cef-fd40ecf3c9b2", 00:10:13.499 "is_configured": true, 00:10:13.499 "data_offset": 2048, 00:10:13.499 "data_size": 63488 00:10:13.499 }, 00:10:13.499 { 00:10:13.499 "name": "BaseBdev3", 00:10:13.499 "uuid": "fb7f649a-b555-59ef-a24f-52ca20079402", 00:10:13.499 "is_configured": true, 00:10:13.499 "data_offset": 2048, 00:10:13.499 "data_size": 63488 00:10:13.499 }, 00:10:13.499 { 00:10:13.499 "name": "BaseBdev4", 00:10:13.499 "uuid": "179d942f-82fe-508f-9449-1987983e1e02", 00:10:13.499 "is_configured": true, 00:10:13.499 "data_offset": 2048, 00:10:13.499 "data_size": 63488 00:10:13.499 } 00:10:13.499 ] 00:10:13.499 }' 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.499 16:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.068 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:14.068 16:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:14.068 [2024-11-28 16:23:05.752057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.008 16:23:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.008 "name": "raid_bdev1", 00:10:15.008 "uuid": "8ccf8a87-8118-479b-a8c4-24485b8e07a4", 00:10:15.008 "strip_size_kb": 64, 00:10:15.008 "state": "online", 00:10:15.008 "raid_level": "concat", 00:10:15.008 "superblock": true, 00:10:15.008 "num_base_bdevs": 4, 00:10:15.008 "num_base_bdevs_discovered": 4, 00:10:15.008 "num_base_bdevs_operational": 4, 00:10:15.008 "base_bdevs_list": [ 00:10:15.008 { 00:10:15.008 "name": "BaseBdev1", 00:10:15.008 "uuid": "daa345d5-4625-557d-89a7-4b7132287085", 00:10:15.008 "is_configured": true, 00:10:15.008 "data_offset": 2048, 00:10:15.008 "data_size": 63488 00:10:15.008 }, 00:10:15.008 { 00:10:15.008 "name": "BaseBdev2", 00:10:15.008 "uuid": "0df5d90a-3c0b-57f3-9cef-fd40ecf3c9b2", 00:10:15.008 "is_configured": true, 00:10:15.008 "data_offset": 2048, 00:10:15.008 "data_size": 63488 00:10:15.008 }, 00:10:15.008 { 00:10:15.008 "name": "BaseBdev3", 00:10:15.008 "uuid": "fb7f649a-b555-59ef-a24f-52ca20079402", 00:10:15.008 "is_configured": true, 00:10:15.008 "data_offset": 2048, 00:10:15.008 "data_size": 63488 00:10:15.008 }, 00:10:15.008 { 00:10:15.008 "name": "BaseBdev4", 00:10:15.008 "uuid": "179d942f-82fe-508f-9449-1987983e1e02", 00:10:15.008 "is_configured": true, 00:10:15.008 "data_offset": 2048, 00:10:15.008 "data_size": 63488 00:10:15.008 } 00:10:15.008 ] 00:10:15.008 }' 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.008 16:23:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.577 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.577 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.577 16:23:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.577 [2024-11-28 16:23:07.123864] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.577 [2024-11-28 16:23:07.123965] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.577 [2024-11-28 16:23:07.126392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.577 [2024-11-28 16:23:07.126444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.577 [2024-11-28 16:23:07.126486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.577 [2024-11-28 16:23:07.126501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:15.577 { 00:10:15.577 "results": [ 00:10:15.577 { 00:10:15.577 "job": "raid_bdev1", 00:10:15.578 "core_mask": "0x1", 00:10:15.578 "workload": "randrw", 00:10:15.578 "percentage": 50, 00:10:15.578 "status": "finished", 00:10:15.578 "queue_depth": 1, 00:10:15.578 "io_size": 131072, 00:10:15.578 "runtime": 1.372733, 00:10:15.578 "iops": 17077.610868246047, 00:10:15.578 "mibps": 2134.701358530756, 00:10:15.578 "io_failed": 1, 00:10:15.578 "io_timeout": 0, 00:10:15.578 "avg_latency_us": 81.23883534785858, 00:10:15.578 "min_latency_us": 24.929257641921396, 00:10:15.578 "max_latency_us": 1330.7528384279476 00:10:15.578 } 00:10:15.578 ], 00:10:15.578 "core_count": 1 00:10:15.578 } 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83852 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83852 ']' 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83852 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83852 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.578 killing process with pid 83852 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83852' 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83852 00:10:15.578 [2024-11-28 16:23:07.172754] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.578 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83852 00:10:15.578 [2024-11-28 16:23:07.207744] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dqiIxY32m9 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.838 ************************************ 00:10:15.838 END TEST raid_write_error_test 00:10:15.838 ************************************ 00:10:15.838 16:23:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:15.838 00:10:15.838 real 0m3.389s 00:10:15.838 user 0m4.280s 00:10:15.838 sys 0m0.562s 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.838 16:23:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.838 16:23:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:15.838 16:23:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:15.838 16:23:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:15.838 16:23:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.838 16:23:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.838 ************************************ 00:10:15.838 START TEST raid_state_function_test 00:10:15.838 ************************************ 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:15.838 16:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83989 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83989' 00:10:15.838 Process raid pid: 83989 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83989 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83989 ']' 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.838 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.099 [2024-11-28 16:23:07.617615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:16.099 [2024-11-28 16:23:07.617800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.099 [2024-11-28 16:23:07.770580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.099 [2024-11-28 16:23:07.813729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.099 [2024-11-28 16:23:07.854668] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.099 [2024-11-28 16:23:07.854704] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 [2024-11-28 16:23:08.447331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.039 [2024-11-28 16:23:08.447426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.039 [2024-11-28 16:23:08.447457] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.039 [2024-11-28 16:23:08.447479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.039 [2024-11-28 16:23:08.447499] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:17.039 [2024-11-28 16:23:08.447526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.039 [2024-11-28 16:23:08.447543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.039 [2024-11-28 16:23:08.447562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.039 "name": "Existed_Raid", 00:10:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.039 "strip_size_kb": 0, 00:10:17.039 "state": "configuring", 00:10:17.039 "raid_level": "raid1", 00:10:17.039 "superblock": false, 00:10:17.039 "num_base_bdevs": 4, 00:10:17.039 "num_base_bdevs_discovered": 0, 00:10:17.039 "num_base_bdevs_operational": 4, 00:10:17.039 "base_bdevs_list": [ 00:10:17.039 { 00:10:17.039 "name": "BaseBdev1", 00:10:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.039 "is_configured": false, 00:10:17.039 "data_offset": 0, 00:10:17.039 "data_size": 0 00:10:17.039 }, 00:10:17.039 { 00:10:17.039 "name": "BaseBdev2", 00:10:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.039 "is_configured": false, 00:10:17.039 "data_offset": 0, 00:10:17.039 "data_size": 0 00:10:17.039 }, 00:10:17.039 { 00:10:17.039 "name": "BaseBdev3", 00:10:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.039 "is_configured": false, 00:10:17.039 "data_offset": 0, 00:10:17.039 "data_size": 0 00:10:17.039 }, 00:10:17.039 { 00:10:17.039 "name": "BaseBdev4", 00:10:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.039 "is_configured": false, 00:10:17.039 "data_offset": 0, 00:10:17.039 "data_size": 0 00:10:17.039 } 00:10:17.039 ] 00:10:17.039 }' 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.039 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.299 [2024-11-28 16:23:08.846651] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.299 [2024-11-28 16:23:08.846715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.299 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.299 [2024-11-28 16:23:08.854656] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.299 [2024-11-28 16:23:08.854704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.299 [2024-11-28 16:23:08.854713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.299 [2024-11-28 16:23:08.854723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.299 [2024-11-28 16:23:08.854729] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.299 [2024-11-28 16:23:08.854738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.299 [2024-11-28 16:23:08.854744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.300 [2024-11-28 16:23:08.854753] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 [2024-11-28 16:23:08.877886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.300 BaseBdev1 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 [ 00:10:17.300 { 00:10:17.300 "name": "BaseBdev1", 00:10:17.300 "aliases": [ 00:10:17.300 "473c9b93-d828-4549-88fa-a8ad9e8aa222" 00:10:17.300 ], 00:10:17.300 "product_name": "Malloc disk", 00:10:17.300 "block_size": 512, 00:10:17.300 "num_blocks": 65536, 00:10:17.300 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:17.300 "assigned_rate_limits": { 00:10:17.300 "rw_ios_per_sec": 0, 00:10:17.300 "rw_mbytes_per_sec": 0, 00:10:17.300 "r_mbytes_per_sec": 0, 00:10:17.300 "w_mbytes_per_sec": 0 00:10:17.300 }, 00:10:17.300 "claimed": true, 00:10:17.300 "claim_type": "exclusive_write", 00:10:17.300 "zoned": false, 00:10:17.300 "supported_io_types": { 00:10:17.300 "read": true, 00:10:17.300 "write": true, 00:10:17.300 "unmap": true, 00:10:17.300 "flush": true, 00:10:17.300 "reset": true, 00:10:17.300 "nvme_admin": false, 00:10:17.300 "nvme_io": false, 00:10:17.300 "nvme_io_md": false, 00:10:17.300 "write_zeroes": true, 00:10:17.300 "zcopy": true, 00:10:17.300 "get_zone_info": false, 00:10:17.300 "zone_management": false, 00:10:17.300 "zone_append": false, 00:10:17.300 "compare": false, 00:10:17.300 "compare_and_write": false, 00:10:17.300 "abort": true, 00:10:17.300 "seek_hole": false, 00:10:17.300 "seek_data": false, 00:10:17.300 "copy": true, 00:10:17.300 "nvme_iov_md": false 00:10:17.300 }, 00:10:17.300 "memory_domains": [ 00:10:17.300 { 00:10:17.300 "dma_device_id": "system", 00:10:17.300 "dma_device_type": 1 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.300 "dma_device_type": 2 00:10:17.300 } 00:10:17.300 ], 00:10:17.300 "driver_specific": {} 00:10:17.300 } 00:10:17.300 ] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.300 "name": "Existed_Raid", 
00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.300 "strip_size_kb": 0, 00:10:17.300 "state": "configuring", 00:10:17.300 "raid_level": "raid1", 00:10:17.300 "superblock": false, 00:10:17.300 "num_base_bdevs": 4, 00:10:17.300 "num_base_bdevs_discovered": 1, 00:10:17.300 "num_base_bdevs_operational": 4, 00:10:17.300 "base_bdevs_list": [ 00:10:17.300 { 00:10:17.300 "name": "BaseBdev1", 00:10:17.300 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:17.300 "is_configured": true, 00:10:17.300 "data_offset": 0, 00:10:17.300 "data_size": 65536 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "name": "BaseBdev2", 00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.300 "is_configured": false, 00:10:17.300 "data_offset": 0, 00:10:17.300 "data_size": 0 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "name": "BaseBdev3", 00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.300 "is_configured": false, 00:10:17.300 "data_offset": 0, 00:10:17.300 "data_size": 0 00:10:17.300 }, 00:10:17.300 { 00:10:17.300 "name": "BaseBdev4", 00:10:17.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.300 "is_configured": false, 00:10:17.300 "data_offset": 0, 00:10:17.300 "data_size": 0 00:10:17.300 } 00:10:17.300 ] 00:10:17.300 }' 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.300 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.870 [2024-11-28 16:23:09.373075] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.870 [2024-11-28 16:23:09.373241] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.870 [2024-11-28 16:23:09.385077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.870 [2024-11-28 16:23:09.387340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.870 [2024-11-28 16:23:09.387437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.870 [2024-11-28 16:23:09.387469] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.870 [2024-11-28 16:23:09.387491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.870 [2024-11-28 16:23:09.387508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:17.870 [2024-11-28 16:23:09.387527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:17.870 
16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.870 "name": "Existed_Raid", 00:10:17.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.870 "strip_size_kb": 0, 00:10:17.870 "state": "configuring", 00:10:17.870 "raid_level": "raid1", 00:10:17.870 "superblock": false, 00:10:17.870 "num_base_bdevs": 4, 00:10:17.870 "num_base_bdevs_discovered": 1, 
00:10:17.870 "num_base_bdevs_operational": 4, 00:10:17.870 "base_bdevs_list": [ 00:10:17.870 { 00:10:17.870 "name": "BaseBdev1", 00:10:17.870 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:17.870 "is_configured": true, 00:10:17.870 "data_offset": 0, 00:10:17.870 "data_size": 65536 00:10:17.870 }, 00:10:17.870 { 00:10:17.870 "name": "BaseBdev2", 00:10:17.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.870 "is_configured": false, 00:10:17.870 "data_offset": 0, 00:10:17.870 "data_size": 0 00:10:17.870 }, 00:10:17.870 { 00:10:17.870 "name": "BaseBdev3", 00:10:17.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.870 "is_configured": false, 00:10:17.870 "data_offset": 0, 00:10:17.870 "data_size": 0 00:10:17.870 }, 00:10:17.870 { 00:10:17.870 "name": "BaseBdev4", 00:10:17.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.870 "is_configured": false, 00:10:17.870 "data_offset": 0, 00:10:17.870 "data_size": 0 00:10:17.870 } 00:10:17.870 ] 00:10:17.870 }' 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.870 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.130 [2024-11-28 16:23:09.870425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.130 BaseBdev2 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.130 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.130 [ 00:10:18.130 { 00:10:18.130 "name": "BaseBdev2", 00:10:18.130 "aliases": [ 00:10:18.130 "c4b1df80-e416-408e-9a02-1462538834dc" 00:10:18.130 ], 00:10:18.131 "product_name": "Malloc disk", 00:10:18.131 "block_size": 512, 00:10:18.131 "num_blocks": 65536, 00:10:18.131 "uuid": "c4b1df80-e416-408e-9a02-1462538834dc", 00:10:18.131 "assigned_rate_limits": { 00:10:18.131 "rw_ios_per_sec": 0, 00:10:18.131 "rw_mbytes_per_sec": 0, 00:10:18.131 "r_mbytes_per_sec": 0, 00:10:18.131 "w_mbytes_per_sec": 0 00:10:18.131 }, 00:10:18.131 "claimed": true, 00:10:18.131 "claim_type": "exclusive_write", 00:10:18.131 "zoned": false, 00:10:18.131 "supported_io_types": { 00:10:18.397 "read": true, 
00:10:18.397 "write": true, 00:10:18.397 "unmap": true, 00:10:18.397 "flush": true, 00:10:18.397 "reset": true, 00:10:18.397 "nvme_admin": false, 00:10:18.397 "nvme_io": false, 00:10:18.397 "nvme_io_md": false, 00:10:18.397 "write_zeroes": true, 00:10:18.397 "zcopy": true, 00:10:18.397 "get_zone_info": false, 00:10:18.397 "zone_management": false, 00:10:18.397 "zone_append": false, 00:10:18.397 "compare": false, 00:10:18.397 "compare_and_write": false, 00:10:18.397 "abort": true, 00:10:18.397 "seek_hole": false, 00:10:18.397 "seek_data": false, 00:10:18.397 "copy": true, 00:10:18.397 "nvme_iov_md": false 00:10:18.397 }, 00:10:18.397 "memory_domains": [ 00:10:18.397 { 00:10:18.397 "dma_device_id": "system", 00:10:18.397 "dma_device_type": 1 00:10:18.397 }, 00:10:18.397 { 00:10:18.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.397 "dma_device_type": 2 00:10:18.397 } 00:10:18.397 ], 00:10:18.397 "driver_specific": {} 00:10:18.397 } 00:10:18.397 ] 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.397 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.397 "name": "Existed_Raid", 00:10:18.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.397 "strip_size_kb": 0, 00:10:18.397 "state": "configuring", 00:10:18.397 "raid_level": "raid1", 00:10:18.397 "superblock": false, 00:10:18.397 "num_base_bdevs": 4, 00:10:18.397 "num_base_bdevs_discovered": 2, 00:10:18.397 "num_base_bdevs_operational": 4, 00:10:18.397 "base_bdevs_list": [ 00:10:18.397 { 00:10:18.397 "name": "BaseBdev1", 00:10:18.397 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:18.397 "is_configured": true, 00:10:18.397 "data_offset": 0, 00:10:18.397 "data_size": 65536 00:10:18.397 }, 00:10:18.397 { 00:10:18.397 "name": "BaseBdev2", 00:10:18.397 "uuid": "c4b1df80-e416-408e-9a02-1462538834dc", 00:10:18.397 "is_configured": true, 
00:10:18.397 "data_offset": 0, 00:10:18.397 "data_size": 65536 00:10:18.397 }, 00:10:18.397 { 00:10:18.398 "name": "BaseBdev3", 00:10:18.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.398 "is_configured": false, 00:10:18.398 "data_offset": 0, 00:10:18.398 "data_size": 0 00:10:18.398 }, 00:10:18.398 { 00:10:18.398 "name": "BaseBdev4", 00:10:18.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.398 "is_configured": false, 00:10:18.398 "data_offset": 0, 00:10:18.398 "data_size": 0 00:10:18.398 } 00:10:18.398 ] 00:10:18.398 }' 00:10:18.398 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.398 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.675 [2024-11-28 16:23:10.384596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.675 BaseBdev3 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.675 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.675 [ 00:10:18.675 { 00:10:18.675 "name": "BaseBdev3", 00:10:18.675 "aliases": [ 00:10:18.675 "765e7a97-73af-4714-9ac5-717ae5502026" 00:10:18.675 ], 00:10:18.675 "product_name": "Malloc disk", 00:10:18.675 "block_size": 512, 00:10:18.675 "num_blocks": 65536, 00:10:18.675 "uuid": "765e7a97-73af-4714-9ac5-717ae5502026", 00:10:18.675 "assigned_rate_limits": { 00:10:18.675 "rw_ios_per_sec": 0, 00:10:18.675 "rw_mbytes_per_sec": 0, 00:10:18.675 "r_mbytes_per_sec": 0, 00:10:18.675 "w_mbytes_per_sec": 0 00:10:18.675 }, 00:10:18.675 "claimed": true, 00:10:18.675 "claim_type": "exclusive_write", 00:10:18.675 "zoned": false, 00:10:18.675 "supported_io_types": { 00:10:18.675 "read": true, 00:10:18.675 "write": true, 00:10:18.675 "unmap": true, 00:10:18.676 "flush": true, 00:10:18.676 "reset": true, 00:10:18.676 "nvme_admin": false, 00:10:18.676 "nvme_io": false, 00:10:18.676 "nvme_io_md": false, 00:10:18.676 "write_zeroes": true, 00:10:18.676 "zcopy": true, 00:10:18.676 "get_zone_info": false, 00:10:18.676 "zone_management": false, 00:10:18.676 "zone_append": false, 00:10:18.676 "compare": false, 00:10:18.676 "compare_and_write": false, 
00:10:18.676 "abort": true, 00:10:18.676 "seek_hole": false, 00:10:18.676 "seek_data": false, 00:10:18.676 "copy": true, 00:10:18.676 "nvme_iov_md": false 00:10:18.676 }, 00:10:18.676 "memory_domains": [ 00:10:18.676 { 00:10:18.676 "dma_device_id": "system", 00:10:18.676 "dma_device_type": 1 00:10:18.676 }, 00:10:18.676 { 00:10:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.676 "dma_device_type": 2 00:10:18.676 } 00:10:18.676 ], 00:10:18.676 "driver_specific": {} 00:10:18.676 } 00:10:18.676 ] 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.676 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.935 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.935 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.935 "name": "Existed_Raid", 00:10:18.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.935 "strip_size_kb": 0, 00:10:18.935 "state": "configuring", 00:10:18.935 "raid_level": "raid1", 00:10:18.935 "superblock": false, 00:10:18.935 "num_base_bdevs": 4, 00:10:18.935 "num_base_bdevs_discovered": 3, 00:10:18.935 "num_base_bdevs_operational": 4, 00:10:18.935 "base_bdevs_list": [ 00:10:18.935 { 00:10:18.935 "name": "BaseBdev1", 00:10:18.935 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:18.935 "is_configured": true, 00:10:18.935 "data_offset": 0, 00:10:18.935 "data_size": 65536 00:10:18.935 }, 00:10:18.935 { 00:10:18.935 "name": "BaseBdev2", 00:10:18.935 "uuid": "c4b1df80-e416-408e-9a02-1462538834dc", 00:10:18.935 "is_configured": true, 00:10:18.935 "data_offset": 0, 00:10:18.935 "data_size": 65536 00:10:18.935 }, 00:10:18.935 { 00:10:18.935 "name": "BaseBdev3", 00:10:18.935 "uuid": "765e7a97-73af-4714-9ac5-717ae5502026", 00:10:18.935 "is_configured": true, 00:10:18.935 "data_offset": 0, 00:10:18.935 "data_size": 65536 00:10:18.935 }, 00:10:18.935 { 00:10:18.935 "name": "BaseBdev4", 00:10:18.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.935 "is_configured": false, 
00:10:18.935 "data_offset": 0, 00:10:18.935 "data_size": 0 00:10:18.935 } 00:10:18.935 ] 00:10:18.935 }' 00:10:18.935 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.936 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.195 [2024-11-28 16:23:10.842911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.195 [2024-11-28 16:23:10.843028] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:19.195 [2024-11-28 16:23:10.843053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:19.195 [2024-11-28 16:23:10.843357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:19.195 [2024-11-28 16:23:10.843534] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:19.195 [2024-11-28 16:23:10.843581] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:19.195 [2024-11-28 16:23:10.843855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.195 BaseBdev4 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.195 [ 00:10:19.195 { 00:10:19.195 "name": "BaseBdev4", 00:10:19.195 "aliases": [ 00:10:19.195 "115809cd-b0c8-43cf-814d-42810cc1a740" 00:10:19.195 ], 00:10:19.195 "product_name": "Malloc disk", 00:10:19.195 "block_size": 512, 00:10:19.195 "num_blocks": 65536, 00:10:19.195 "uuid": "115809cd-b0c8-43cf-814d-42810cc1a740", 00:10:19.195 "assigned_rate_limits": { 00:10:19.195 "rw_ios_per_sec": 0, 00:10:19.195 "rw_mbytes_per_sec": 0, 00:10:19.195 "r_mbytes_per_sec": 0, 00:10:19.195 "w_mbytes_per_sec": 0 00:10:19.195 }, 00:10:19.195 "claimed": true, 00:10:19.195 "claim_type": "exclusive_write", 00:10:19.195 "zoned": false, 00:10:19.195 "supported_io_types": { 00:10:19.195 "read": true, 00:10:19.195 "write": true, 00:10:19.195 "unmap": true, 00:10:19.195 "flush": true, 00:10:19.195 "reset": true, 00:10:19.195 
"nvme_admin": false, 00:10:19.195 "nvme_io": false, 00:10:19.195 "nvme_io_md": false, 00:10:19.195 "write_zeroes": true, 00:10:19.195 "zcopy": true, 00:10:19.195 "get_zone_info": false, 00:10:19.195 "zone_management": false, 00:10:19.195 "zone_append": false, 00:10:19.195 "compare": false, 00:10:19.195 "compare_and_write": false, 00:10:19.195 "abort": true, 00:10:19.195 "seek_hole": false, 00:10:19.195 "seek_data": false, 00:10:19.195 "copy": true, 00:10:19.195 "nvme_iov_md": false 00:10:19.195 }, 00:10:19.195 "memory_domains": [ 00:10:19.195 { 00:10:19.195 "dma_device_id": "system", 00:10:19.195 "dma_device_type": 1 00:10:19.195 }, 00:10:19.195 { 00:10:19.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.195 "dma_device_type": 2 00:10:19.195 } 00:10:19.195 ], 00:10:19.195 "driver_specific": {} 00:10:19.195 } 00:10:19.195 ] 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.195 16:23:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.195 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.195 "name": "Existed_Raid", 00:10:19.195 "uuid": "011f0fd7-e9c6-4970-ab39-bc2b25ea0bfe", 00:10:19.195 "strip_size_kb": 0, 00:10:19.195 "state": "online", 00:10:19.195 "raid_level": "raid1", 00:10:19.195 "superblock": false, 00:10:19.195 "num_base_bdevs": 4, 00:10:19.195 "num_base_bdevs_discovered": 4, 00:10:19.195 "num_base_bdevs_operational": 4, 00:10:19.195 "base_bdevs_list": [ 00:10:19.195 { 00:10:19.195 "name": "BaseBdev1", 00:10:19.195 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:19.195 "is_configured": true, 00:10:19.195 "data_offset": 0, 00:10:19.195 "data_size": 65536 00:10:19.195 }, 00:10:19.195 { 00:10:19.195 "name": "BaseBdev2", 00:10:19.195 "uuid": "c4b1df80-e416-408e-9a02-1462538834dc", 00:10:19.195 "is_configured": true, 00:10:19.195 "data_offset": 0, 00:10:19.195 "data_size": 65536 00:10:19.195 }, 00:10:19.195 { 00:10:19.195 "name": "BaseBdev3", 00:10:19.195 "uuid": 
"765e7a97-73af-4714-9ac5-717ae5502026", 00:10:19.195 "is_configured": true, 00:10:19.195 "data_offset": 0, 00:10:19.196 "data_size": 65536 00:10:19.196 }, 00:10:19.196 { 00:10:19.196 "name": "BaseBdev4", 00:10:19.196 "uuid": "115809cd-b0c8-43cf-814d-42810cc1a740", 00:10:19.196 "is_configured": true, 00:10:19.196 "data_offset": 0, 00:10:19.196 "data_size": 65536 00:10:19.196 } 00:10:19.196 ] 00:10:19.196 }' 00:10:19.196 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.196 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.765 [2024-11-28 16:23:11.354461] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.765 16:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.765 "name": "Existed_Raid", 00:10:19.765 "aliases": [ 00:10:19.765 "011f0fd7-e9c6-4970-ab39-bc2b25ea0bfe" 00:10:19.765 ], 00:10:19.765 "product_name": "Raid Volume", 00:10:19.765 "block_size": 512, 00:10:19.765 "num_blocks": 65536, 00:10:19.765 "uuid": "011f0fd7-e9c6-4970-ab39-bc2b25ea0bfe", 00:10:19.765 "assigned_rate_limits": { 00:10:19.765 "rw_ios_per_sec": 0, 00:10:19.765 "rw_mbytes_per_sec": 0, 00:10:19.765 "r_mbytes_per_sec": 0, 00:10:19.765 "w_mbytes_per_sec": 0 00:10:19.765 }, 00:10:19.765 "claimed": false, 00:10:19.765 "zoned": false, 00:10:19.765 "supported_io_types": { 00:10:19.765 "read": true, 00:10:19.765 "write": true, 00:10:19.765 "unmap": false, 00:10:19.765 "flush": false, 00:10:19.765 "reset": true, 00:10:19.765 "nvme_admin": false, 00:10:19.765 "nvme_io": false, 00:10:19.765 "nvme_io_md": false, 00:10:19.765 "write_zeroes": true, 00:10:19.765 "zcopy": false, 00:10:19.765 "get_zone_info": false, 00:10:19.765 "zone_management": false, 00:10:19.765 "zone_append": false, 00:10:19.765 "compare": false, 00:10:19.765 "compare_and_write": false, 00:10:19.765 "abort": false, 00:10:19.765 "seek_hole": false, 00:10:19.765 "seek_data": false, 00:10:19.765 "copy": false, 00:10:19.765 "nvme_iov_md": false 00:10:19.765 }, 00:10:19.765 "memory_domains": [ 00:10:19.765 { 00:10:19.765 "dma_device_id": "system", 00:10:19.765 "dma_device_type": 1 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.765 "dma_device_type": 2 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "system", 00:10:19.765 "dma_device_type": 1 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.765 "dma_device_type": 2 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "system", 00:10:19.765 "dma_device_type": 1 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:19.765 "dma_device_type": 2 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "system", 00:10:19.765 "dma_device_type": 1 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.765 "dma_device_type": 2 00:10:19.765 } 00:10:19.765 ], 00:10:19.765 "driver_specific": { 00:10:19.765 "raid": { 00:10:19.765 "uuid": "011f0fd7-e9c6-4970-ab39-bc2b25ea0bfe", 00:10:19.765 "strip_size_kb": 0, 00:10:19.765 "state": "online", 00:10:19.765 "raid_level": "raid1", 00:10:19.765 "superblock": false, 00:10:19.765 "num_base_bdevs": 4, 00:10:19.765 "num_base_bdevs_discovered": 4, 00:10:19.765 "num_base_bdevs_operational": 4, 00:10:19.765 "base_bdevs_list": [ 00:10:19.765 { 00:10:19.765 "name": "BaseBdev1", 00:10:19.765 "uuid": "473c9b93-d828-4549-88fa-a8ad9e8aa222", 00:10:19.765 "is_configured": true, 00:10:19.765 "data_offset": 0, 00:10:19.765 "data_size": 65536 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "name": "BaseBdev2", 00:10:19.765 "uuid": "c4b1df80-e416-408e-9a02-1462538834dc", 00:10:19.765 "is_configured": true, 00:10:19.765 "data_offset": 0, 00:10:19.765 "data_size": 65536 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "name": "BaseBdev3", 00:10:19.765 "uuid": "765e7a97-73af-4714-9ac5-717ae5502026", 00:10:19.765 "is_configured": true, 00:10:19.765 "data_offset": 0, 00:10:19.765 "data_size": 65536 00:10:19.765 }, 00:10:19.765 { 00:10:19.765 "name": "BaseBdev4", 00:10:19.765 "uuid": "115809cd-b0c8-43cf-814d-42810cc1a740", 00:10:19.765 "is_configured": true, 00:10:19.765 "data_offset": 0, 00:10:19.765 "data_size": 65536 00:10:19.765 } 00:10:19.765 ] 00:10:19.765 } 00:10:19.765 } 00:10:19.765 }' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:19.765 BaseBdev2 00:10:19.765 BaseBdev3 
00:10:19.765 BaseBdev4' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.765 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.027 16:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.027 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.028 16:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.028 [2024-11-28 16:23:11.673605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.028 
16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.028 "name": "Existed_Raid", 00:10:20.028 "uuid": "011f0fd7-e9c6-4970-ab39-bc2b25ea0bfe", 00:10:20.028 "strip_size_kb": 0, 00:10:20.028 "state": "online", 00:10:20.028 "raid_level": "raid1", 00:10:20.028 "superblock": false, 00:10:20.028 "num_base_bdevs": 4, 00:10:20.028 "num_base_bdevs_discovered": 3, 00:10:20.028 "num_base_bdevs_operational": 3, 00:10:20.028 "base_bdevs_list": [ 00:10:20.028 { 00:10:20.028 "name": null, 00:10:20.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.028 "is_configured": false, 00:10:20.028 "data_offset": 0, 00:10:20.028 "data_size": 65536 00:10:20.028 }, 00:10:20.028 { 00:10:20.028 "name": "BaseBdev2", 00:10:20.028 "uuid": "c4b1df80-e416-408e-9a02-1462538834dc", 00:10:20.028 "is_configured": true, 00:10:20.028 "data_offset": 0, 00:10:20.028 "data_size": 65536 00:10:20.028 }, 00:10:20.028 { 00:10:20.028 "name": "BaseBdev3", 00:10:20.028 "uuid": "765e7a97-73af-4714-9ac5-717ae5502026", 00:10:20.028 "is_configured": true, 00:10:20.028 "data_offset": 0, 
00:10:20.028 "data_size": 65536 00:10:20.028 }, 00:10:20.028 { 00:10:20.028 "name": "BaseBdev4", 00:10:20.028 "uuid": "115809cd-b0c8-43cf-814d-42810cc1a740", 00:10:20.028 "is_configured": true, 00:10:20.028 "data_offset": 0, 00:10:20.028 "data_size": 65536 00:10:20.028 } 00:10:20.028 ] 00:10:20.028 }' 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.028 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.597 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 [2024-11-28 16:23:12.211818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 [2024-11-28 16:23:12.278851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 [2024-11-28 16:23:12.345812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:20.598 [2024-11-28 16:23:12.345960] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.598 [2024-11-28 16:23:12.357473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.598 [2024-11-28 16:23:12.357591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.598 [2024-11-28 16:23:12.357633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.598 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.858 BaseBdev2 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.858 [ 00:10:20.858 { 00:10:20.858 "name": "BaseBdev2", 00:10:20.858 "aliases": [ 00:10:20.858 "95763d5d-d60c-4475-851c-a8b6a22adaa1" 00:10:20.858 ], 00:10:20.858 "product_name": "Malloc disk", 00:10:20.858 "block_size": 512, 00:10:20.858 "num_blocks": 65536, 00:10:20.858 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:20.858 "assigned_rate_limits": { 00:10:20.858 "rw_ios_per_sec": 0, 00:10:20.858 "rw_mbytes_per_sec": 0, 00:10:20.858 "r_mbytes_per_sec": 0, 00:10:20.858 "w_mbytes_per_sec": 0 00:10:20.858 }, 00:10:20.858 "claimed": false, 00:10:20.858 "zoned": false, 00:10:20.858 "supported_io_types": { 00:10:20.858 "read": true, 00:10:20.858 "write": true, 00:10:20.858 "unmap": true, 00:10:20.858 "flush": true, 00:10:20.858 "reset": true, 00:10:20.858 "nvme_admin": false, 00:10:20.858 "nvme_io": false, 00:10:20.858 "nvme_io_md": false, 00:10:20.858 "write_zeroes": true, 00:10:20.858 "zcopy": true, 00:10:20.858 "get_zone_info": false, 00:10:20.858 "zone_management": false, 00:10:20.858 "zone_append": false, 00:10:20.858 "compare": false, 
00:10:20.858 "compare_and_write": false, 00:10:20.858 "abort": true, 00:10:20.858 "seek_hole": false, 00:10:20.858 "seek_data": false, 00:10:20.858 "copy": true, 00:10:20.858 "nvme_iov_md": false 00:10:20.858 }, 00:10:20.858 "memory_domains": [ 00:10:20.858 { 00:10:20.858 "dma_device_id": "system", 00:10:20.858 "dma_device_type": 1 00:10:20.858 }, 00:10:20.858 { 00:10:20.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.858 "dma_device_type": 2 00:10:20.858 } 00:10:20.858 ], 00:10:20.858 "driver_specific": {} 00:10:20.858 } 00:10:20.858 ] 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.858 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 BaseBdev3 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 [ 00:10:20.859 { 00:10:20.859 "name": "BaseBdev3", 00:10:20.859 "aliases": [ 00:10:20.859 "c23a3998-e5d0-4260-a94a-fb493ea4378c" 00:10:20.859 ], 00:10:20.859 "product_name": "Malloc disk", 00:10:20.859 "block_size": 512, 00:10:20.859 "num_blocks": 65536, 00:10:20.859 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:20.859 "assigned_rate_limits": { 00:10:20.859 "rw_ios_per_sec": 0, 00:10:20.859 "rw_mbytes_per_sec": 0, 00:10:20.859 "r_mbytes_per_sec": 0, 00:10:20.859 "w_mbytes_per_sec": 0 00:10:20.859 }, 00:10:20.859 "claimed": false, 00:10:20.859 "zoned": false, 00:10:20.859 "supported_io_types": { 00:10:20.859 "read": true, 00:10:20.859 "write": true, 00:10:20.859 "unmap": true, 00:10:20.859 "flush": true, 00:10:20.859 "reset": true, 00:10:20.859 "nvme_admin": false, 00:10:20.859 "nvme_io": false, 00:10:20.859 "nvme_io_md": false, 00:10:20.859 "write_zeroes": true, 00:10:20.859 "zcopy": true, 00:10:20.859 "get_zone_info": false, 00:10:20.859 "zone_management": false, 00:10:20.859 "zone_append": false, 00:10:20.859 "compare": false, 00:10:20.859 
"compare_and_write": false, 00:10:20.859 "abort": true, 00:10:20.859 "seek_hole": false, 00:10:20.859 "seek_data": false, 00:10:20.859 "copy": true, 00:10:20.859 "nvme_iov_md": false 00:10:20.859 }, 00:10:20.859 "memory_domains": [ 00:10:20.859 { 00:10:20.859 "dma_device_id": "system", 00:10:20.859 "dma_device_type": 1 00:10:20.859 }, 00:10:20.859 { 00:10:20.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.859 "dma_device_type": 2 00:10:20.859 } 00:10:20.859 ], 00:10:20.859 "driver_specific": {} 00:10:20.859 } 00:10:20.859 ] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 BaseBdev4 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 [ 00:10:20.859 { 00:10:20.859 "name": "BaseBdev4", 00:10:20.859 "aliases": [ 00:10:20.859 "8c089435-876b-4268-931d-929278a9978f" 00:10:20.859 ], 00:10:20.859 "product_name": "Malloc disk", 00:10:20.859 "block_size": 512, 00:10:20.859 "num_blocks": 65536, 00:10:20.859 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:20.859 "assigned_rate_limits": { 00:10:20.859 "rw_ios_per_sec": 0, 00:10:20.859 "rw_mbytes_per_sec": 0, 00:10:20.859 "r_mbytes_per_sec": 0, 00:10:20.859 "w_mbytes_per_sec": 0 00:10:20.859 }, 00:10:20.859 "claimed": false, 00:10:20.859 "zoned": false, 00:10:20.859 "supported_io_types": { 00:10:20.859 "read": true, 00:10:20.859 "write": true, 00:10:20.859 "unmap": true, 00:10:20.859 "flush": true, 00:10:20.859 "reset": true, 00:10:20.859 "nvme_admin": false, 00:10:20.859 "nvme_io": false, 00:10:20.859 "nvme_io_md": false, 00:10:20.859 "write_zeroes": true, 00:10:20.859 "zcopy": true, 00:10:20.859 "get_zone_info": false, 00:10:20.859 "zone_management": false, 00:10:20.859 "zone_append": false, 00:10:20.859 "compare": false, 00:10:20.859 
"compare_and_write": false, 00:10:20.859 "abort": true, 00:10:20.859 "seek_hole": false, 00:10:20.859 "seek_data": false, 00:10:20.859 "copy": true, 00:10:20.859 "nvme_iov_md": false 00:10:20.859 }, 00:10:20.859 "memory_domains": [ 00:10:20.859 { 00:10:20.859 "dma_device_id": "system", 00:10:20.859 "dma_device_type": 1 00:10:20.859 }, 00:10:20.859 { 00:10:20.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.859 "dma_device_type": 2 00:10:20.859 } 00:10:20.859 ], 00:10:20.859 "driver_specific": {} 00:10:20.859 } 00:10:20.859 ] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 [2024-11-28 16:23:12.572578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:20.859 [2024-11-28 16:23:12.572672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:20.859 [2024-11-28 16:23:12.572711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.859 [2024-11-28 16:23:12.574506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.859 [2024-11-28 16:23:12.574588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.859 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.119 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.119 "name": "Existed_Raid", 00:10:21.119 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:21.119 "strip_size_kb": 0, 00:10:21.119 "state": "configuring", 00:10:21.119 "raid_level": "raid1", 00:10:21.119 "superblock": false, 00:10:21.119 "num_base_bdevs": 4, 00:10:21.119 "num_base_bdevs_discovered": 3, 00:10:21.119 "num_base_bdevs_operational": 4, 00:10:21.119 "base_bdevs_list": [ 00:10:21.119 { 00:10:21.119 "name": "BaseBdev1", 00:10:21.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.119 "is_configured": false, 00:10:21.119 "data_offset": 0, 00:10:21.119 "data_size": 0 00:10:21.119 }, 00:10:21.119 { 00:10:21.119 "name": "BaseBdev2", 00:10:21.119 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:21.119 "is_configured": true, 00:10:21.119 "data_offset": 0, 00:10:21.119 "data_size": 65536 00:10:21.119 }, 00:10:21.119 { 00:10:21.119 "name": "BaseBdev3", 00:10:21.119 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:21.119 "is_configured": true, 00:10:21.119 "data_offset": 0, 00:10:21.119 "data_size": 65536 00:10:21.119 }, 00:10:21.119 { 00:10:21.119 "name": "BaseBdev4", 00:10:21.119 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:21.119 "is_configured": true, 00:10:21.119 "data_offset": 0, 00:10:21.119 "data_size": 65536 00:10:21.119 } 00:10:21.119 ] 00:10:21.119 }' 00:10:21.119 16:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.119 16:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.379 [2024-11-28 16:23:13.079791] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.379 "name": "Existed_Raid", 00:10:21.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.379 
"strip_size_kb": 0, 00:10:21.379 "state": "configuring", 00:10:21.379 "raid_level": "raid1", 00:10:21.379 "superblock": false, 00:10:21.379 "num_base_bdevs": 4, 00:10:21.379 "num_base_bdevs_discovered": 2, 00:10:21.379 "num_base_bdevs_operational": 4, 00:10:21.379 "base_bdevs_list": [ 00:10:21.379 { 00:10:21.379 "name": "BaseBdev1", 00:10:21.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.379 "is_configured": false, 00:10:21.379 "data_offset": 0, 00:10:21.379 "data_size": 0 00:10:21.379 }, 00:10:21.379 { 00:10:21.379 "name": null, 00:10:21.379 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:21.379 "is_configured": false, 00:10:21.379 "data_offset": 0, 00:10:21.379 "data_size": 65536 00:10:21.379 }, 00:10:21.379 { 00:10:21.379 "name": "BaseBdev3", 00:10:21.379 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:21.379 "is_configured": true, 00:10:21.379 "data_offset": 0, 00:10:21.379 "data_size": 65536 00:10:21.379 }, 00:10:21.379 { 00:10:21.379 "name": "BaseBdev4", 00:10:21.379 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:21.379 "is_configured": true, 00:10:21.379 "data_offset": 0, 00:10:21.379 "data_size": 65536 00:10:21.379 } 00:10:21.379 ] 00:10:21.379 }' 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.379 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.947 16:23:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.947 [2024-11-28 16:23:13.593978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.947 BaseBdev1 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.947 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.947 [ 00:10:21.947 { 00:10:21.947 "name": "BaseBdev1", 00:10:21.947 "aliases": [ 00:10:21.947 "220fc877-8a5c-4826-885e-d27f1506f2c5" 00:10:21.947 ], 00:10:21.947 "product_name": "Malloc disk", 00:10:21.947 "block_size": 512, 00:10:21.947 "num_blocks": 65536, 00:10:21.947 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:21.948 "assigned_rate_limits": { 00:10:21.948 "rw_ios_per_sec": 0, 00:10:21.948 "rw_mbytes_per_sec": 0, 00:10:21.948 "r_mbytes_per_sec": 0, 00:10:21.948 "w_mbytes_per_sec": 0 00:10:21.948 }, 00:10:21.948 "claimed": true, 00:10:21.948 "claim_type": "exclusive_write", 00:10:21.948 "zoned": false, 00:10:21.948 "supported_io_types": { 00:10:21.948 "read": true, 00:10:21.948 "write": true, 00:10:21.948 "unmap": true, 00:10:21.948 "flush": true, 00:10:21.948 "reset": true, 00:10:21.948 "nvme_admin": false, 00:10:21.948 "nvme_io": false, 00:10:21.948 "nvme_io_md": false, 00:10:21.948 "write_zeroes": true, 00:10:21.948 "zcopy": true, 00:10:21.948 "get_zone_info": false, 00:10:21.948 "zone_management": false, 00:10:21.948 "zone_append": false, 00:10:21.948 "compare": false, 00:10:21.948 "compare_and_write": false, 00:10:21.948 "abort": true, 00:10:21.948 "seek_hole": false, 00:10:21.948 "seek_data": false, 00:10:21.948 "copy": true, 00:10:21.948 "nvme_iov_md": false 00:10:21.948 }, 00:10:21.948 "memory_domains": [ 00:10:21.948 { 00:10:21.948 "dma_device_id": "system", 00:10:21.948 "dma_device_type": 1 00:10:21.948 }, 00:10:21.948 { 00:10:21.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.948 "dma_device_type": 2 00:10:21.948 } 00:10:21.948 ], 00:10:21.948 "driver_specific": {} 00:10:21.948 } 00:10:21.948 ] 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.948 "name": "Existed_Raid", 00:10:21.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.948 
"strip_size_kb": 0, 00:10:21.948 "state": "configuring", 00:10:21.948 "raid_level": "raid1", 00:10:21.948 "superblock": false, 00:10:21.948 "num_base_bdevs": 4, 00:10:21.948 "num_base_bdevs_discovered": 3, 00:10:21.948 "num_base_bdevs_operational": 4, 00:10:21.948 "base_bdevs_list": [ 00:10:21.948 { 00:10:21.948 "name": "BaseBdev1", 00:10:21.948 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:21.948 "is_configured": true, 00:10:21.948 "data_offset": 0, 00:10:21.948 "data_size": 65536 00:10:21.948 }, 00:10:21.948 { 00:10:21.948 "name": null, 00:10:21.948 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:21.948 "is_configured": false, 00:10:21.948 "data_offset": 0, 00:10:21.948 "data_size": 65536 00:10:21.948 }, 00:10:21.948 { 00:10:21.948 "name": "BaseBdev3", 00:10:21.948 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:21.948 "is_configured": true, 00:10:21.948 "data_offset": 0, 00:10:21.948 "data_size": 65536 00:10:21.948 }, 00:10:21.948 { 00:10:21.948 "name": "BaseBdev4", 00:10:21.948 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:21.948 "is_configured": true, 00:10:21.948 "data_offset": 0, 00:10:21.948 "data_size": 65536 00:10:21.948 } 00:10:21.948 ] 00:10:21.948 }' 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.948 16:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.518 
16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 [2024-11-28 16:23:14.093191] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.518 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.518 "name": "Existed_Raid", 00:10:22.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.518 "strip_size_kb": 0, 00:10:22.518 "state": "configuring", 00:10:22.518 "raid_level": "raid1", 00:10:22.518 "superblock": false, 00:10:22.518 "num_base_bdevs": 4, 00:10:22.518 "num_base_bdevs_discovered": 2, 00:10:22.518 "num_base_bdevs_operational": 4, 00:10:22.518 "base_bdevs_list": [ 00:10:22.518 { 00:10:22.518 "name": "BaseBdev1", 00:10:22.518 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:22.518 "is_configured": true, 00:10:22.518 "data_offset": 0, 00:10:22.518 "data_size": 65536 00:10:22.518 }, 00:10:22.518 { 00:10:22.518 "name": null, 00:10:22.518 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:22.518 "is_configured": false, 00:10:22.518 "data_offset": 0, 00:10:22.518 "data_size": 65536 00:10:22.518 }, 00:10:22.518 { 00:10:22.518 "name": null, 00:10:22.518 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:22.518 "is_configured": false, 00:10:22.518 "data_offset": 0, 00:10:22.518 "data_size": 65536 00:10:22.518 }, 00:10:22.518 { 00:10:22.518 "name": "BaseBdev4", 00:10:22.518 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:22.518 "is_configured": true, 00:10:22.518 "data_offset": 0, 00:10:22.518 "data_size": 65536 00:10:22.518 } 00:10:22.518 ] 00:10:22.518 }' 00:10:22.519 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.519 16:23:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.779 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.779 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:22.779 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.779 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 [2024-11-28 16:23:14.596397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.040 "name": "Existed_Raid", 00:10:23.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.040 "strip_size_kb": 0, 00:10:23.040 "state": "configuring", 00:10:23.040 "raid_level": "raid1", 00:10:23.040 "superblock": false, 00:10:23.040 "num_base_bdevs": 4, 00:10:23.040 "num_base_bdevs_discovered": 3, 00:10:23.040 "num_base_bdevs_operational": 4, 00:10:23.040 "base_bdevs_list": [ 00:10:23.040 { 00:10:23.040 "name": "BaseBdev1", 00:10:23.040 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:23.040 "is_configured": true, 00:10:23.040 "data_offset": 0, 00:10:23.040 "data_size": 65536 00:10:23.040 }, 00:10:23.040 { 00:10:23.040 "name": null, 00:10:23.040 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:23.040 "is_configured": false, 00:10:23.040 "data_offset": 0, 00:10:23.040 "data_size": 65536 00:10:23.040 }, 00:10:23.040 { 
00:10:23.040 "name": "BaseBdev3", 00:10:23.040 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:23.040 "is_configured": true, 00:10:23.040 "data_offset": 0, 00:10:23.040 "data_size": 65536 00:10:23.040 }, 00:10:23.040 { 00:10:23.040 "name": "BaseBdev4", 00:10:23.040 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:23.040 "is_configured": true, 00:10:23.040 "data_offset": 0, 00:10:23.040 "data_size": 65536 00:10:23.040 } 00:10:23.040 ] 00:10:23.040 }' 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.040 16:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.299 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.299 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:23.299 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.299 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.299 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.559 [2024-11-28 16:23:15.079590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.559 "name": "Existed_Raid", 00:10:23.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.559 "strip_size_kb": 0, 00:10:23.559 "state": "configuring", 00:10:23.559 "raid_level": "raid1", 00:10:23.559 "superblock": false, 00:10:23.559 
"num_base_bdevs": 4, 00:10:23.559 "num_base_bdevs_discovered": 2, 00:10:23.559 "num_base_bdevs_operational": 4, 00:10:23.559 "base_bdevs_list": [ 00:10:23.559 { 00:10:23.559 "name": null, 00:10:23.559 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:23.559 "is_configured": false, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 }, 00:10:23.559 { 00:10:23.559 "name": null, 00:10:23.559 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:23.559 "is_configured": false, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 }, 00:10:23.559 { 00:10:23.559 "name": "BaseBdev3", 00:10:23.559 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:23.559 "is_configured": true, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 }, 00:10:23.559 { 00:10:23.559 "name": "BaseBdev4", 00:10:23.559 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:23.559 "is_configured": true, 00:10:23.559 "data_offset": 0, 00:10:23.559 "data_size": 65536 00:10:23.559 } 00:10:23.559 ] 00:10:23.559 }' 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.559 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:23.818 16:23:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.818 [2024-11-28 16:23:15.577132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.818 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.076 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.076 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.076 16:23:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.076 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.076 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.076 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.076 "name": "Existed_Raid", 00:10:24.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.076 "strip_size_kb": 0, 00:10:24.076 "state": "configuring", 00:10:24.076 "raid_level": "raid1", 00:10:24.076 "superblock": false, 00:10:24.076 "num_base_bdevs": 4, 00:10:24.076 "num_base_bdevs_discovered": 3, 00:10:24.076 "num_base_bdevs_operational": 4, 00:10:24.076 "base_bdevs_list": [ 00:10:24.076 { 00:10:24.076 "name": null, 00:10:24.076 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:24.076 "is_configured": false, 00:10:24.076 "data_offset": 0, 00:10:24.076 "data_size": 65536 00:10:24.076 }, 00:10:24.076 { 00:10:24.076 "name": "BaseBdev2", 00:10:24.076 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:24.076 "is_configured": true, 00:10:24.076 "data_offset": 0, 00:10:24.076 "data_size": 65536 00:10:24.076 }, 00:10:24.076 { 00:10:24.076 "name": "BaseBdev3", 00:10:24.076 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:24.076 "is_configured": true, 00:10:24.076 "data_offset": 0, 00:10:24.076 "data_size": 65536 00:10:24.076 }, 00:10:24.076 { 00:10:24.076 "name": "BaseBdev4", 00:10:24.076 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:24.076 "is_configured": true, 00:10:24.076 "data_offset": 0, 00:10:24.076 "data_size": 65536 00:10:24.076 } 00:10:24.076 ] 00:10:24.076 }' 00:10:24.077 16:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.077 16:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 220fc877-8a5c-4826-885e-d27f1506f2c5 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.336 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.596 [2024-11-28 16:23:16.111228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:24.596 [2024-11-28 16:23:16.111344] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:24.596 [2024-11-28 16:23:16.111374] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:24.596 [2024-11-28 16:23:16.111649] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:24.596 [2024-11-28 16:23:16.111850] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:24.596 [2024-11-28 16:23:16.111865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:24.596 [2024-11-28 16:23:16.112047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.596 NewBaseBdev 00:10:24.596 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.596 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:24.596 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.597 16:23:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.597 [ 00:10:24.597 { 00:10:24.597 "name": "NewBaseBdev", 00:10:24.597 "aliases": [ 00:10:24.597 "220fc877-8a5c-4826-885e-d27f1506f2c5" 00:10:24.597 ], 00:10:24.597 "product_name": "Malloc disk", 00:10:24.597 "block_size": 512, 00:10:24.597 "num_blocks": 65536, 00:10:24.597 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:24.597 "assigned_rate_limits": { 00:10:24.597 "rw_ios_per_sec": 0, 00:10:24.597 "rw_mbytes_per_sec": 0, 00:10:24.597 "r_mbytes_per_sec": 0, 00:10:24.597 "w_mbytes_per_sec": 0 00:10:24.597 }, 00:10:24.597 "claimed": true, 00:10:24.597 "claim_type": "exclusive_write", 00:10:24.597 "zoned": false, 00:10:24.597 "supported_io_types": { 00:10:24.597 "read": true, 00:10:24.597 "write": true, 00:10:24.597 "unmap": true, 00:10:24.597 "flush": true, 00:10:24.597 "reset": true, 00:10:24.597 "nvme_admin": false, 00:10:24.597 "nvme_io": false, 00:10:24.597 "nvme_io_md": false, 00:10:24.597 "write_zeroes": true, 00:10:24.597 "zcopy": true, 00:10:24.597 "get_zone_info": false, 00:10:24.597 "zone_management": false, 00:10:24.597 "zone_append": false, 00:10:24.597 "compare": false, 00:10:24.597 "compare_and_write": false, 00:10:24.597 "abort": true, 00:10:24.597 "seek_hole": false, 00:10:24.597 "seek_data": false, 00:10:24.597 "copy": true, 00:10:24.597 "nvme_iov_md": false 00:10:24.597 }, 00:10:24.597 "memory_domains": [ 00:10:24.597 { 00:10:24.597 "dma_device_id": "system", 00:10:24.597 "dma_device_type": 1 00:10:24.597 }, 00:10:24.597 { 00:10:24.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.597 "dma_device_type": 2 00:10:24.597 } 00:10:24.597 ], 00:10:24.597 "driver_specific": {} 00:10:24.597 } 00:10:24.597 ] 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.597 16:23:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.597 "name": "Existed_Raid", 00:10:24.597 "uuid": "407cce23-fa5e-43c6-807f-843e5ba7a6a2", 00:10:24.597 "strip_size_kb": 0, 00:10:24.597 "state": "online", 00:10:24.597 "raid_level": "raid1", 
00:10:24.597 "superblock": false, 00:10:24.597 "num_base_bdevs": 4, 00:10:24.597 "num_base_bdevs_discovered": 4, 00:10:24.597 "num_base_bdevs_operational": 4, 00:10:24.597 "base_bdevs_list": [ 00:10:24.597 { 00:10:24.597 "name": "NewBaseBdev", 00:10:24.597 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:24.597 "is_configured": true, 00:10:24.597 "data_offset": 0, 00:10:24.597 "data_size": 65536 00:10:24.597 }, 00:10:24.597 { 00:10:24.597 "name": "BaseBdev2", 00:10:24.597 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:24.597 "is_configured": true, 00:10:24.597 "data_offset": 0, 00:10:24.597 "data_size": 65536 00:10:24.597 }, 00:10:24.597 { 00:10:24.597 "name": "BaseBdev3", 00:10:24.597 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:24.597 "is_configured": true, 00:10:24.597 "data_offset": 0, 00:10:24.597 "data_size": 65536 00:10:24.597 }, 00:10:24.597 { 00:10:24.597 "name": "BaseBdev4", 00:10:24.597 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:24.597 "is_configured": true, 00:10:24.597 "data_offset": 0, 00:10:24.597 "data_size": 65536 00:10:24.597 } 00:10:24.597 ] 00:10:24.597 }' 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.597 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.856 [2024-11-28 16:23:16.590705] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.856 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.116 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.116 "name": "Existed_Raid", 00:10:25.116 "aliases": [ 00:10:25.116 "407cce23-fa5e-43c6-807f-843e5ba7a6a2" 00:10:25.116 ], 00:10:25.116 "product_name": "Raid Volume", 00:10:25.116 "block_size": 512, 00:10:25.116 "num_blocks": 65536, 00:10:25.116 "uuid": "407cce23-fa5e-43c6-807f-843e5ba7a6a2", 00:10:25.116 "assigned_rate_limits": { 00:10:25.116 "rw_ios_per_sec": 0, 00:10:25.116 "rw_mbytes_per_sec": 0, 00:10:25.116 "r_mbytes_per_sec": 0, 00:10:25.116 "w_mbytes_per_sec": 0 00:10:25.116 }, 00:10:25.116 "claimed": false, 00:10:25.116 "zoned": false, 00:10:25.116 "supported_io_types": { 00:10:25.116 "read": true, 00:10:25.116 "write": true, 00:10:25.116 "unmap": false, 00:10:25.116 "flush": false, 00:10:25.116 "reset": true, 00:10:25.116 "nvme_admin": false, 00:10:25.116 "nvme_io": false, 00:10:25.116 "nvme_io_md": false, 00:10:25.116 "write_zeroes": true, 00:10:25.116 "zcopy": false, 00:10:25.116 "get_zone_info": false, 00:10:25.116 "zone_management": false, 00:10:25.116 "zone_append": false, 00:10:25.116 "compare": false, 00:10:25.116 "compare_and_write": false, 00:10:25.116 "abort": false, 00:10:25.116 "seek_hole": false, 00:10:25.116 "seek_data": false, 00:10:25.116 "copy": false, 00:10:25.116 
"nvme_iov_md": false 00:10:25.116 }, 00:10:25.116 "memory_domains": [ 00:10:25.116 { 00:10:25.116 "dma_device_id": "system", 00:10:25.116 "dma_device_type": 1 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.116 "dma_device_type": 2 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "system", 00:10:25.116 "dma_device_type": 1 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.116 "dma_device_type": 2 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "system", 00:10:25.116 "dma_device_type": 1 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.116 "dma_device_type": 2 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "system", 00:10:25.116 "dma_device_type": 1 00:10:25.116 }, 00:10:25.116 { 00:10:25.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.116 "dma_device_type": 2 00:10:25.116 } 00:10:25.116 ], 00:10:25.116 "driver_specific": { 00:10:25.116 "raid": { 00:10:25.116 "uuid": "407cce23-fa5e-43c6-807f-843e5ba7a6a2", 00:10:25.116 "strip_size_kb": 0, 00:10:25.116 "state": "online", 00:10:25.116 "raid_level": "raid1", 00:10:25.116 "superblock": false, 00:10:25.116 "num_base_bdevs": 4, 00:10:25.116 "num_base_bdevs_discovered": 4, 00:10:25.116 "num_base_bdevs_operational": 4, 00:10:25.116 "base_bdevs_list": [ 00:10:25.116 { 00:10:25.116 "name": "NewBaseBdev", 00:10:25.116 "uuid": "220fc877-8a5c-4826-885e-d27f1506f2c5", 00:10:25.116 "is_configured": true, 00:10:25.116 "data_offset": 0, 00:10:25.116 "data_size": 65536 00:10:25.117 }, 00:10:25.117 { 00:10:25.117 "name": "BaseBdev2", 00:10:25.117 "uuid": "95763d5d-d60c-4475-851c-a8b6a22adaa1", 00:10:25.117 "is_configured": true, 00:10:25.117 "data_offset": 0, 00:10:25.117 "data_size": 65536 00:10:25.117 }, 00:10:25.117 { 00:10:25.117 "name": "BaseBdev3", 00:10:25.117 "uuid": "c23a3998-e5d0-4260-a94a-fb493ea4378c", 00:10:25.117 "is_configured": true, 
00:10:25.117 "data_offset": 0, 00:10:25.117 "data_size": 65536 00:10:25.117 }, 00:10:25.117 { 00:10:25.117 "name": "BaseBdev4", 00:10:25.117 "uuid": "8c089435-876b-4268-931d-929278a9978f", 00:10:25.117 "is_configured": true, 00:10:25.117 "data_offset": 0, 00:10:25.117 "data_size": 65536 00:10:25.117 } 00:10:25.117 ] 00:10:25.117 } 00:10:25.117 } 00:10:25.117 }' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:25.117 BaseBdev2 00:10:25.117 BaseBdev3 00:10:25.117 BaseBdev4' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.117 [2024-11-28 16:23:16.873930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.117 [2024-11-28 16:23:16.873992] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.117 [2024-11-28 16:23:16.874084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.117 [2024-11-28 16:23:16.874347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.117 [2024-11-28 16:23:16.874407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83989 
00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83989 ']' 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83989 00:10:25.117 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83989 00:10:25.378 killing process with pid 83989 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83989' 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83989 00:10:25.378 [2024-11-28 16:23:16.922966] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.378 16:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83989 00:10:25.378 [2024-11-28 16:23:16.964427] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:25.638 00:10:25.638 real 0m9.676s 00:10:25.638 user 0m16.562s 00:10:25.638 sys 0m2.074s 00:10:25.638 ************************************ 00:10:25.638 END TEST raid_state_function_test 00:10:25.638 ************************************ 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 16:23:17 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:25.638 16:23:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:25.638 16:23:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.638 16:23:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 ************************************ 00:10:25.638 START TEST raid_state_function_test_sb 00:10:25.638 ************************************ 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.638 16:23:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84638 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84638' 00:10:25.638 Process raid pid: 84638 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84638 00:10:25.638 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84638 ']' 00:10:25.639 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.639 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.639 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.639 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.639 16:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.639 [2024-11-28 16:23:17.384060] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:25.639 [2024-11-28 16:23:17.384210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.899 [2024-11-28 16:23:17.549181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.899 [2024-11-28 16:23:17.595267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.899 [2024-11-28 16:23:17.637597] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.899 [2024-11-28 16:23:17.637634] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.469 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.469 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:26.469 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.469 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.469 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.469 [2024-11-28 16:23:18.222924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.469 [2024-11-28 16:23:18.222977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.469 [2024-11-28 16:23:18.222997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.470 [2024-11-28 16:23:18.223008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.470 [2024-11-28 16:23:18.223016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:26.470 [2024-11-28 16:23:18.223029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.470 [2024-11-28 16:23:18.223035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.470 [2024-11-28 16:23:18.223043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.470 16:23:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.470 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.729 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.729 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.729 "name": "Existed_Raid", 00:10:26.729 "uuid": "7a00c0bf-efdd-45c3-a085-03b478e79414", 00:10:26.729 "strip_size_kb": 0, 00:10:26.729 "state": "configuring", 00:10:26.729 "raid_level": "raid1", 00:10:26.729 "superblock": true, 00:10:26.729 "num_base_bdevs": 4, 00:10:26.729 "num_base_bdevs_discovered": 0, 00:10:26.729 "num_base_bdevs_operational": 4, 00:10:26.729 "base_bdevs_list": [ 00:10:26.729 { 00:10:26.729 "name": "BaseBdev1", 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.729 "is_configured": false, 00:10:26.729 "data_offset": 0, 00:10:26.729 "data_size": 0 00:10:26.729 }, 00:10:26.729 { 00:10:26.729 "name": "BaseBdev2", 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.729 "is_configured": false, 00:10:26.729 "data_offset": 0, 00:10:26.729 "data_size": 0 00:10:26.729 }, 00:10:26.729 { 00:10:26.729 "name": "BaseBdev3", 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.729 "is_configured": false, 00:10:26.729 "data_offset": 0, 00:10:26.729 "data_size": 0 00:10:26.729 }, 00:10:26.729 { 00:10:26.729 "name": "BaseBdev4", 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.729 "is_configured": false, 00:10:26.729 "data_offset": 0, 00:10:26.729 "data_size": 0 00:10:26.729 } 00:10:26.729 ] 00:10:26.729 }' 00:10:26.729 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.729 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.989 [2024-11-28 16:23:18.693993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.989 [2024-11-28 16:23:18.694081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.989 [2024-11-28 16:23:18.702008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.989 [2024-11-28 16:23:18.702086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.989 [2024-11-28 16:23:18.702112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.989 [2024-11-28 16:23:18.702134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.989 [2024-11-28 16:23:18.702151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.989 [2024-11-28 16:23:18.702172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.989 [2024-11-28 16:23:18.702189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:26.989 [2024-11-28 16:23:18.702208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.989 [2024-11-28 16:23:18.718885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.989 BaseBdev1 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.989 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.989 [ 00:10:26.989 { 00:10:26.989 "name": "BaseBdev1", 00:10:26.989 "aliases": [ 00:10:26.989 "66e1a949-116f-4f18-96fe-3b691a12ab45" 00:10:26.989 ], 00:10:26.989 "product_name": "Malloc disk", 00:10:26.989 "block_size": 512, 00:10:26.989 "num_blocks": 65536, 00:10:26.989 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:26.989 "assigned_rate_limits": { 00:10:26.989 "rw_ios_per_sec": 0, 00:10:26.989 "rw_mbytes_per_sec": 0, 00:10:26.989 "r_mbytes_per_sec": 0, 00:10:26.989 "w_mbytes_per_sec": 0 00:10:26.989 }, 00:10:26.989 "claimed": true, 00:10:26.989 "claim_type": "exclusive_write", 00:10:26.989 "zoned": false, 00:10:26.989 "supported_io_types": { 00:10:26.989 "read": true, 00:10:26.989 "write": true, 00:10:26.990 "unmap": true, 00:10:26.990 "flush": true, 00:10:26.990 "reset": true, 00:10:26.990 "nvme_admin": false, 00:10:26.990 "nvme_io": false, 00:10:26.990 "nvme_io_md": false, 00:10:26.990 "write_zeroes": true, 00:10:26.990 "zcopy": true, 00:10:26.990 "get_zone_info": false, 00:10:26.990 "zone_management": false, 00:10:26.990 "zone_append": false, 00:10:26.990 "compare": false, 00:10:26.990 "compare_and_write": false, 00:10:26.990 "abort": true, 00:10:26.990 "seek_hole": false, 00:10:26.990 "seek_data": false, 00:10:26.990 "copy": true, 00:10:26.990 "nvme_iov_md": false 00:10:26.990 }, 00:10:26.990 "memory_domains": [ 00:10:26.990 { 00:10:26.990 "dma_device_id": "system", 00:10:26.990 "dma_device_type": 1 00:10:26.990 }, 00:10:26.990 { 00:10:26.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.990 "dma_device_type": 2 00:10:26.990 } 00:10:26.990 ], 00:10:26.990 "driver_specific": {} 
00:10:26.990 } 00:10:26.990 ] 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.990 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.250 "name": "Existed_Raid", 00:10:27.250 "uuid": "dc6f3d3b-df8a-413c-9581-6ad2700b07c8", 00:10:27.250 "strip_size_kb": 0, 00:10:27.250 "state": "configuring", 00:10:27.250 "raid_level": "raid1", 00:10:27.250 "superblock": true, 00:10:27.250 "num_base_bdevs": 4, 00:10:27.250 "num_base_bdevs_discovered": 1, 00:10:27.250 "num_base_bdevs_operational": 4, 00:10:27.250 "base_bdevs_list": [ 00:10:27.250 { 00:10:27.250 "name": "BaseBdev1", 00:10:27.250 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:27.250 "is_configured": true, 00:10:27.250 "data_offset": 2048, 00:10:27.250 "data_size": 63488 00:10:27.250 }, 00:10:27.250 { 00:10:27.250 "name": "BaseBdev2", 00:10:27.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.250 "is_configured": false, 00:10:27.250 "data_offset": 0, 00:10:27.250 "data_size": 0 00:10:27.250 }, 00:10:27.250 { 00:10:27.250 "name": "BaseBdev3", 00:10:27.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.250 "is_configured": false, 00:10:27.250 "data_offset": 0, 00:10:27.250 "data_size": 0 00:10:27.250 }, 00:10:27.250 { 00:10:27.250 "name": "BaseBdev4", 00:10:27.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.250 "is_configured": false, 00:10:27.250 "data_offset": 0, 00:10:27.250 "data_size": 0 00:10:27.250 } 00:10:27.250 ] 00:10:27.250 }' 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.250 16:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.509 [2024-11-28 16:23:19.190066] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.509 [2024-11-28 16:23:19.190113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.509 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.509 [2024-11-28 16:23:19.198096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.509 [2024-11-28 16:23:19.199886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.509 [2024-11-28 16:23:19.199924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.509 [2024-11-28 16:23:19.199934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.509 [2024-11-28 16:23:19.199957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.509 [2024-11-28 16:23:19.199964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.509 [2024-11-28 16:23:19.199971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.510 16:23:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.510 "name": 
"Existed_Raid", 00:10:27.510 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:27.510 "strip_size_kb": 0, 00:10:27.510 "state": "configuring", 00:10:27.510 "raid_level": "raid1", 00:10:27.510 "superblock": true, 00:10:27.510 "num_base_bdevs": 4, 00:10:27.510 "num_base_bdevs_discovered": 1, 00:10:27.510 "num_base_bdevs_operational": 4, 00:10:27.510 "base_bdevs_list": [ 00:10:27.510 { 00:10:27.510 "name": "BaseBdev1", 00:10:27.510 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:27.510 "is_configured": true, 00:10:27.510 "data_offset": 2048, 00:10:27.510 "data_size": 63488 00:10:27.510 }, 00:10:27.510 { 00:10:27.510 "name": "BaseBdev2", 00:10:27.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.510 "is_configured": false, 00:10:27.510 "data_offset": 0, 00:10:27.510 "data_size": 0 00:10:27.510 }, 00:10:27.510 { 00:10:27.510 "name": "BaseBdev3", 00:10:27.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.510 "is_configured": false, 00:10:27.510 "data_offset": 0, 00:10:27.510 "data_size": 0 00:10:27.510 }, 00:10:27.510 { 00:10:27.510 "name": "BaseBdev4", 00:10:27.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.510 "is_configured": false, 00:10:27.510 "data_offset": 0, 00:10:27.510 "data_size": 0 00:10:27.510 } 00:10:27.510 ] 00:10:27.510 }' 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.510 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.079 [2024-11-28 16:23:19.628681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.079 
BaseBdev2 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.079 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.079 [ 00:10:28.079 { 00:10:28.079 "name": "BaseBdev2", 00:10:28.079 "aliases": [ 00:10:28.079 "363369cf-885e-49bc-857a-8bd1d4d848a9" 00:10:28.079 ], 00:10:28.079 "product_name": "Malloc disk", 00:10:28.079 "block_size": 512, 00:10:28.079 "num_blocks": 65536, 00:10:28.079 "uuid": "363369cf-885e-49bc-857a-8bd1d4d848a9", 00:10:28.079 "assigned_rate_limits": { 
00:10:28.079 "rw_ios_per_sec": 0, 00:10:28.079 "rw_mbytes_per_sec": 0, 00:10:28.079 "r_mbytes_per_sec": 0, 00:10:28.079 "w_mbytes_per_sec": 0 00:10:28.079 }, 00:10:28.079 "claimed": true, 00:10:28.079 "claim_type": "exclusive_write", 00:10:28.079 "zoned": false, 00:10:28.079 "supported_io_types": { 00:10:28.079 "read": true, 00:10:28.079 "write": true, 00:10:28.079 "unmap": true, 00:10:28.079 "flush": true, 00:10:28.079 "reset": true, 00:10:28.079 "nvme_admin": false, 00:10:28.079 "nvme_io": false, 00:10:28.080 "nvme_io_md": false, 00:10:28.080 "write_zeroes": true, 00:10:28.080 "zcopy": true, 00:10:28.080 "get_zone_info": false, 00:10:28.080 "zone_management": false, 00:10:28.080 "zone_append": false, 00:10:28.080 "compare": false, 00:10:28.080 "compare_and_write": false, 00:10:28.080 "abort": true, 00:10:28.080 "seek_hole": false, 00:10:28.080 "seek_data": false, 00:10:28.080 "copy": true, 00:10:28.080 "nvme_iov_md": false 00:10:28.080 }, 00:10:28.080 "memory_domains": [ 00:10:28.080 { 00:10:28.080 "dma_device_id": "system", 00:10:28.080 "dma_device_type": 1 00:10:28.080 }, 00:10:28.080 { 00:10:28.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.080 "dma_device_type": 2 00:10:28.080 } 00:10:28.080 ], 00:10:28.080 "driver_specific": {} 00:10:28.080 } 00:10:28.080 ] 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.080 "name": "Existed_Raid", 00:10:28.080 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:28.080 "strip_size_kb": 0, 00:10:28.080 "state": "configuring", 00:10:28.080 "raid_level": "raid1", 00:10:28.080 "superblock": true, 00:10:28.080 "num_base_bdevs": 4, 00:10:28.080 "num_base_bdevs_discovered": 2, 00:10:28.080 "num_base_bdevs_operational": 4, 00:10:28.080 
"base_bdevs_list": [ 00:10:28.080 { 00:10:28.080 "name": "BaseBdev1", 00:10:28.080 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:28.080 "is_configured": true, 00:10:28.080 "data_offset": 2048, 00:10:28.080 "data_size": 63488 00:10:28.080 }, 00:10:28.080 { 00:10:28.080 "name": "BaseBdev2", 00:10:28.080 "uuid": "363369cf-885e-49bc-857a-8bd1d4d848a9", 00:10:28.080 "is_configured": true, 00:10:28.080 "data_offset": 2048, 00:10:28.080 "data_size": 63488 00:10:28.080 }, 00:10:28.080 { 00:10:28.080 "name": "BaseBdev3", 00:10:28.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.080 "is_configured": false, 00:10:28.080 "data_offset": 0, 00:10:28.080 "data_size": 0 00:10:28.080 }, 00:10:28.080 { 00:10:28.080 "name": "BaseBdev4", 00:10:28.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.080 "is_configured": false, 00:10:28.080 "data_offset": 0, 00:10:28.080 "data_size": 0 00:10:28.080 } 00:10:28.080 ] 00:10:28.080 }' 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.080 16:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 [2024-11-28 16:23:20.102875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.503 BaseBdev3 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 [ 00:10:28.503 { 00:10:28.503 "name": "BaseBdev3", 00:10:28.503 "aliases": [ 00:10:28.503 "7b73c37f-01b7-4de4-ab14-b95f039fb221" 00:10:28.503 ], 00:10:28.503 "product_name": "Malloc disk", 00:10:28.503 "block_size": 512, 00:10:28.503 "num_blocks": 65536, 00:10:28.503 "uuid": "7b73c37f-01b7-4de4-ab14-b95f039fb221", 00:10:28.503 "assigned_rate_limits": { 00:10:28.503 "rw_ios_per_sec": 0, 00:10:28.503 "rw_mbytes_per_sec": 0, 00:10:28.503 "r_mbytes_per_sec": 0, 00:10:28.503 "w_mbytes_per_sec": 0 00:10:28.503 }, 00:10:28.503 "claimed": true, 00:10:28.503 "claim_type": "exclusive_write", 00:10:28.503 "zoned": false, 00:10:28.503 "supported_io_types": { 00:10:28.503 "read": true, 00:10:28.503 
"write": true, 00:10:28.503 "unmap": true, 00:10:28.503 "flush": true, 00:10:28.503 "reset": true, 00:10:28.503 "nvme_admin": false, 00:10:28.503 "nvme_io": false, 00:10:28.503 "nvme_io_md": false, 00:10:28.503 "write_zeroes": true, 00:10:28.503 "zcopy": true, 00:10:28.503 "get_zone_info": false, 00:10:28.503 "zone_management": false, 00:10:28.503 "zone_append": false, 00:10:28.503 "compare": false, 00:10:28.503 "compare_and_write": false, 00:10:28.503 "abort": true, 00:10:28.503 "seek_hole": false, 00:10:28.503 "seek_data": false, 00:10:28.503 "copy": true, 00:10:28.503 "nvme_iov_md": false 00:10:28.503 }, 00:10:28.503 "memory_domains": [ 00:10:28.503 { 00:10:28.503 "dma_device_id": "system", 00:10:28.503 "dma_device_type": 1 00:10:28.503 }, 00:10:28.503 { 00:10:28.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.503 "dma_device_type": 2 00:10:28.503 } 00:10:28.503 ], 00:10:28.503 "driver_specific": {} 00:10:28.503 } 00:10:28.503 ] 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.503 "name": "Existed_Raid", 00:10:28.503 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:28.503 "strip_size_kb": 0, 00:10:28.503 "state": "configuring", 00:10:28.503 "raid_level": "raid1", 00:10:28.503 "superblock": true, 00:10:28.503 "num_base_bdevs": 4, 00:10:28.503 "num_base_bdevs_discovered": 3, 00:10:28.503 "num_base_bdevs_operational": 4, 00:10:28.503 "base_bdevs_list": [ 00:10:28.503 { 00:10:28.503 "name": "BaseBdev1", 00:10:28.503 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:28.503 "is_configured": true, 00:10:28.503 "data_offset": 2048, 00:10:28.503 "data_size": 63488 00:10:28.503 }, 00:10:28.503 { 00:10:28.503 "name": "BaseBdev2", 00:10:28.503 "uuid": 
"363369cf-885e-49bc-857a-8bd1d4d848a9", 00:10:28.503 "is_configured": true, 00:10:28.503 "data_offset": 2048, 00:10:28.503 "data_size": 63488 00:10:28.503 }, 00:10:28.503 { 00:10:28.503 "name": "BaseBdev3", 00:10:28.503 "uuid": "7b73c37f-01b7-4de4-ab14-b95f039fb221", 00:10:28.503 "is_configured": true, 00:10:28.503 "data_offset": 2048, 00:10:28.503 "data_size": 63488 00:10:28.503 }, 00:10:28.503 { 00:10:28.503 "name": "BaseBdev4", 00:10:28.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.503 "is_configured": false, 00:10:28.503 "data_offset": 0, 00:10:28.503 "data_size": 0 00:10:28.503 } 00:10:28.503 ] 00:10:28.503 }' 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.503 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.074 [2024-11-28 16:23:20.577074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.074 [2024-11-28 16:23:20.577340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:29.074 [2024-11-28 16:23:20.577390] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.074 [2024-11-28 16:23:20.577686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:29.074 [2024-11-28 16:23:20.577876] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:29.074 [2024-11-28 16:23:20.577923] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:10:29.074 [2024-11-28 16:23:20.578100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.074 BaseBdev4 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.074 [ 00:10:29.074 { 00:10:29.074 "name": "BaseBdev4", 00:10:29.074 "aliases": [ 00:10:29.074 "69567967-e770-4a8e-b321-4e436de044b9" 00:10:29.074 ], 00:10:29.074 "product_name": "Malloc disk", 00:10:29.074 "block_size": 512, 00:10:29.074 
"num_blocks": 65536, 00:10:29.074 "uuid": "69567967-e770-4a8e-b321-4e436de044b9", 00:10:29.074 "assigned_rate_limits": { 00:10:29.074 "rw_ios_per_sec": 0, 00:10:29.074 "rw_mbytes_per_sec": 0, 00:10:29.074 "r_mbytes_per_sec": 0, 00:10:29.074 "w_mbytes_per_sec": 0 00:10:29.074 }, 00:10:29.074 "claimed": true, 00:10:29.074 "claim_type": "exclusive_write", 00:10:29.074 "zoned": false, 00:10:29.074 "supported_io_types": { 00:10:29.074 "read": true, 00:10:29.074 "write": true, 00:10:29.074 "unmap": true, 00:10:29.074 "flush": true, 00:10:29.074 "reset": true, 00:10:29.074 "nvme_admin": false, 00:10:29.074 "nvme_io": false, 00:10:29.074 "nvme_io_md": false, 00:10:29.074 "write_zeroes": true, 00:10:29.074 "zcopy": true, 00:10:29.074 "get_zone_info": false, 00:10:29.074 "zone_management": false, 00:10:29.074 "zone_append": false, 00:10:29.074 "compare": false, 00:10:29.074 "compare_and_write": false, 00:10:29.074 "abort": true, 00:10:29.074 "seek_hole": false, 00:10:29.074 "seek_data": false, 00:10:29.074 "copy": true, 00:10:29.074 "nvme_iov_md": false 00:10:29.074 }, 00:10:29.074 "memory_domains": [ 00:10:29.074 { 00:10:29.074 "dma_device_id": "system", 00:10:29.074 "dma_device_type": 1 00:10:29.074 }, 00:10:29.074 { 00:10:29.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.074 "dma_device_type": 2 00:10:29.074 } 00:10:29.074 ], 00:10:29.074 "driver_specific": {} 00:10:29.074 } 00:10:29.074 ] 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.074 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.074 "name": "Existed_Raid", 00:10:29.074 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:29.074 "strip_size_kb": 0, 00:10:29.074 "state": "online", 00:10:29.074 "raid_level": "raid1", 00:10:29.074 "superblock": true, 00:10:29.074 "num_base_bdevs": 4, 
00:10:29.074 "num_base_bdevs_discovered": 4, 00:10:29.074 "num_base_bdevs_operational": 4, 00:10:29.075 "base_bdevs_list": [ 00:10:29.075 { 00:10:29.075 "name": "BaseBdev1", 00:10:29.075 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "name": "BaseBdev2", 00:10:29.075 "uuid": "363369cf-885e-49bc-857a-8bd1d4d848a9", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "name": "BaseBdev3", 00:10:29.075 "uuid": "7b73c37f-01b7-4de4-ab14-b95f039fb221", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 }, 00:10:29.075 { 00:10:29.075 "name": "BaseBdev4", 00:10:29.075 "uuid": "69567967-e770-4a8e-b321-4e436de044b9", 00:10:29.075 "is_configured": true, 00:10:29.075 "data_offset": 2048, 00:10:29.075 "data_size": 63488 00:10:29.075 } 00:10:29.075 ] 00:10:29.075 }' 00:10:29.075 16:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.075 16:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.335 
16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.335 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.335 [2024-11-28 16:23:21.080601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.596 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.596 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.596 "name": "Existed_Raid", 00:10:29.596 "aliases": [ 00:10:29.596 "2b513031-6ec5-445c-9775-275abe6411c3" 00:10:29.596 ], 00:10:29.596 "product_name": "Raid Volume", 00:10:29.596 "block_size": 512, 00:10:29.596 "num_blocks": 63488, 00:10:29.596 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:29.596 "assigned_rate_limits": { 00:10:29.596 "rw_ios_per_sec": 0, 00:10:29.596 "rw_mbytes_per_sec": 0, 00:10:29.596 "r_mbytes_per_sec": 0, 00:10:29.596 "w_mbytes_per_sec": 0 00:10:29.596 }, 00:10:29.596 "claimed": false, 00:10:29.596 "zoned": false, 00:10:29.596 "supported_io_types": { 00:10:29.596 "read": true, 00:10:29.596 "write": true, 00:10:29.596 "unmap": false, 00:10:29.596 "flush": false, 00:10:29.596 "reset": true, 00:10:29.596 "nvme_admin": false, 00:10:29.596 "nvme_io": false, 00:10:29.596 "nvme_io_md": false, 00:10:29.596 "write_zeroes": true, 00:10:29.596 "zcopy": false, 00:10:29.596 "get_zone_info": false, 00:10:29.596 "zone_management": false, 00:10:29.596 "zone_append": false, 00:10:29.596 "compare": false, 00:10:29.596 "compare_and_write": false, 00:10:29.596 "abort": false, 00:10:29.596 "seek_hole": false, 00:10:29.596 "seek_data": false, 00:10:29.596 "copy": false, 00:10:29.596 
"nvme_iov_md": false 00:10:29.596 }, 00:10:29.596 "memory_domains": [ 00:10:29.596 { 00:10:29.596 "dma_device_id": "system", 00:10:29.596 "dma_device_type": 1 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.596 "dma_device_type": 2 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "system", 00:10:29.596 "dma_device_type": 1 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.596 "dma_device_type": 2 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "system", 00:10:29.596 "dma_device_type": 1 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.596 "dma_device_type": 2 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "system", 00:10:29.596 "dma_device_type": 1 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.596 "dma_device_type": 2 00:10:29.596 } 00:10:29.596 ], 00:10:29.596 "driver_specific": { 00:10:29.596 "raid": { 00:10:29.596 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:29.596 "strip_size_kb": 0, 00:10:29.596 "state": "online", 00:10:29.596 "raid_level": "raid1", 00:10:29.596 "superblock": true, 00:10:29.596 "num_base_bdevs": 4, 00:10:29.596 "num_base_bdevs_discovered": 4, 00:10:29.596 "num_base_bdevs_operational": 4, 00:10:29.596 "base_bdevs_list": [ 00:10:29.596 { 00:10:29.596 "name": "BaseBdev1", 00:10:29.596 "uuid": "66e1a949-116f-4f18-96fe-3b691a12ab45", 00:10:29.596 "is_configured": true, 00:10:29.596 "data_offset": 2048, 00:10:29.596 "data_size": 63488 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "name": "BaseBdev2", 00:10:29.596 "uuid": "363369cf-885e-49bc-857a-8bd1d4d848a9", 00:10:29.596 "is_configured": true, 00:10:29.596 "data_offset": 2048, 00:10:29.596 "data_size": 63488 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "name": "BaseBdev3", 00:10:29.596 "uuid": "7b73c37f-01b7-4de4-ab14-b95f039fb221", 00:10:29.596 "is_configured": true, 
00:10:29.596 "data_offset": 2048, 00:10:29.596 "data_size": 63488 00:10:29.596 }, 00:10:29.596 { 00:10:29.596 "name": "BaseBdev4", 00:10:29.596 "uuid": "69567967-e770-4a8e-b321-4e436de044b9", 00:10:29.596 "is_configured": true, 00:10:29.596 "data_offset": 2048, 00:10:29.596 "data_size": 63488 00:10:29.596 } 00:10:29.596 ] 00:10:29.596 } 00:10:29.596 } 00:10:29.596 }' 00:10:29.596 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.597 BaseBdev2 00:10:29.597 BaseBdev3 00:10:29.597 BaseBdev4' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.597 16:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.597 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.856 [2024-11-28 16:23:21.363828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:29.856 16:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.856 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.857 "name": "Existed_Raid", 00:10:29.857 "uuid": "2b513031-6ec5-445c-9775-275abe6411c3", 00:10:29.857 "strip_size_kb": 0, 00:10:29.857 
"state": "online", 00:10:29.857 "raid_level": "raid1", 00:10:29.857 "superblock": true, 00:10:29.857 "num_base_bdevs": 4, 00:10:29.857 "num_base_bdevs_discovered": 3, 00:10:29.857 "num_base_bdevs_operational": 3, 00:10:29.857 "base_bdevs_list": [ 00:10:29.857 { 00:10:29.857 "name": null, 00:10:29.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.857 "is_configured": false, 00:10:29.857 "data_offset": 0, 00:10:29.857 "data_size": 63488 00:10:29.857 }, 00:10:29.857 { 00:10:29.857 "name": "BaseBdev2", 00:10:29.857 "uuid": "363369cf-885e-49bc-857a-8bd1d4d848a9", 00:10:29.857 "is_configured": true, 00:10:29.857 "data_offset": 2048, 00:10:29.857 "data_size": 63488 00:10:29.857 }, 00:10:29.857 { 00:10:29.857 "name": "BaseBdev3", 00:10:29.857 "uuid": "7b73c37f-01b7-4de4-ab14-b95f039fb221", 00:10:29.857 "is_configured": true, 00:10:29.857 "data_offset": 2048, 00:10:29.857 "data_size": 63488 00:10:29.857 }, 00:10:29.857 { 00:10:29.857 "name": "BaseBdev4", 00:10:29.857 "uuid": "69567967-e770-4a8e-b321-4e436de044b9", 00:10:29.857 "is_configured": true, 00:10:29.857 "data_offset": 2048, 00:10:29.857 "data_size": 63488 00:10:29.857 } 00:10:29.857 ] 00:10:29.857 }' 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.857 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.116 16:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.116 [2024-11-28 16:23:21.862324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.116 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.375 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.375 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.375 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:10:30.375 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:30.375 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.375 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 [2024-11-28 16:23:21.933452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 [2024-11-28 16:23:22.000578] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:30.376 [2024-11-28 16:23:22.000726] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.376 [2024-11-28 16:23:22.012003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.376 [2024-11-28 16:23:22.012117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.376 [2024-11-28 16:23:22.012158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 BaseBdev2 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:30.376 [ 00:10:30.376 { 00:10:30.376 "name": "BaseBdev2", 00:10:30.376 "aliases": [ 00:10:30.376 "7f9073a9-9f8e-4e6c-ac30-d928afe7a145" 00:10:30.376 ], 00:10:30.376 "product_name": "Malloc disk", 00:10:30.376 "block_size": 512, 00:10:30.376 "num_blocks": 65536, 00:10:30.376 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:30.376 "assigned_rate_limits": { 00:10:30.376 "rw_ios_per_sec": 0, 00:10:30.376 "rw_mbytes_per_sec": 0, 00:10:30.376 "r_mbytes_per_sec": 0, 00:10:30.376 "w_mbytes_per_sec": 0 00:10:30.376 }, 00:10:30.376 "claimed": false, 00:10:30.376 "zoned": false, 00:10:30.376 "supported_io_types": { 00:10:30.376 "read": true, 00:10:30.376 "write": true, 00:10:30.376 "unmap": true, 00:10:30.376 "flush": true, 00:10:30.376 "reset": true, 00:10:30.376 "nvme_admin": false, 00:10:30.376 "nvme_io": false, 00:10:30.376 "nvme_io_md": false, 00:10:30.376 "write_zeroes": true, 00:10:30.376 "zcopy": true, 00:10:30.376 "get_zone_info": false, 00:10:30.376 "zone_management": false, 00:10:30.376 "zone_append": false, 00:10:30.376 "compare": false, 00:10:30.376 "compare_and_write": false, 00:10:30.376 "abort": true, 00:10:30.376 "seek_hole": false, 00:10:30.376 "seek_data": false, 00:10:30.376 "copy": true, 00:10:30.376 "nvme_iov_md": false 00:10:30.376 }, 00:10:30.376 "memory_domains": [ 00:10:30.376 { 00:10:30.376 "dma_device_id": "system", 00:10:30.376 "dma_device_type": 1 00:10:30.376 }, 00:10:30.376 { 00:10:30.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.376 "dma_device_type": 2 00:10:30.376 } 00:10:30.376 ], 00:10:30.376 "driver_specific": {} 00:10:30.376 } 00:10:30.376 ] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.376 16:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 BaseBdev3 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.376 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.636 16:23:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.636 [ 00:10:30.636 { 00:10:30.636 "name": "BaseBdev3", 00:10:30.636 "aliases": [ 00:10:30.636 "548601ef-b9c4-4c72-b80e-c8fb72f757d2" 00:10:30.636 ], 00:10:30.636 "product_name": "Malloc disk", 00:10:30.636 "block_size": 512, 00:10:30.636 "num_blocks": 65536, 00:10:30.637 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:30.637 "assigned_rate_limits": { 00:10:30.637 "rw_ios_per_sec": 0, 00:10:30.637 "rw_mbytes_per_sec": 0, 00:10:30.637 "r_mbytes_per_sec": 0, 00:10:30.637 "w_mbytes_per_sec": 0 00:10:30.637 }, 00:10:30.637 "claimed": false, 00:10:30.637 "zoned": false, 00:10:30.637 "supported_io_types": { 00:10:30.637 "read": true, 00:10:30.637 "write": true, 00:10:30.637 "unmap": true, 00:10:30.637 "flush": true, 00:10:30.637 "reset": true, 00:10:30.637 "nvme_admin": false, 00:10:30.637 "nvme_io": false, 00:10:30.637 "nvme_io_md": false, 00:10:30.637 "write_zeroes": true, 00:10:30.637 "zcopy": true, 00:10:30.637 "get_zone_info": false, 00:10:30.637 "zone_management": false, 00:10:30.637 "zone_append": false, 00:10:30.637 "compare": false, 00:10:30.637 "compare_and_write": false, 00:10:30.637 "abort": true, 00:10:30.637 "seek_hole": false, 00:10:30.637 "seek_data": false, 00:10:30.637 "copy": true, 00:10:30.637 "nvme_iov_md": false 00:10:30.637 }, 00:10:30.637 "memory_domains": [ 00:10:30.637 { 00:10:30.637 "dma_device_id": "system", 00:10:30.637 "dma_device_type": 1 00:10:30.637 }, 00:10:30.637 { 00:10:30.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.637 "dma_device_type": 2 00:10:30.637 } 00:10:30.637 ], 00:10:30.637 "driver_specific": {} 00:10:30.637 } 00:10:30.637 ] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.637 BaseBdev4 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.637 [ 00:10:30.637 { 00:10:30.637 "name": "BaseBdev4", 00:10:30.637 "aliases": [ 00:10:30.637 "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1" 00:10:30.637 ], 00:10:30.637 "product_name": "Malloc disk", 00:10:30.637 "block_size": 512, 00:10:30.637 "num_blocks": 65536, 00:10:30.637 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:30.637 "assigned_rate_limits": { 00:10:30.637 "rw_ios_per_sec": 0, 00:10:30.637 "rw_mbytes_per_sec": 0, 00:10:30.637 "r_mbytes_per_sec": 0, 00:10:30.637 "w_mbytes_per_sec": 0 00:10:30.637 }, 00:10:30.637 "claimed": false, 00:10:30.637 "zoned": false, 00:10:30.637 "supported_io_types": { 00:10:30.637 "read": true, 00:10:30.637 "write": true, 00:10:30.637 "unmap": true, 00:10:30.637 "flush": true, 00:10:30.637 "reset": true, 00:10:30.637 "nvme_admin": false, 00:10:30.637 "nvme_io": false, 00:10:30.637 "nvme_io_md": false, 00:10:30.637 "write_zeroes": true, 00:10:30.637 "zcopy": true, 00:10:30.637 "get_zone_info": false, 00:10:30.637 "zone_management": false, 00:10:30.637 "zone_append": false, 00:10:30.637 "compare": false, 00:10:30.637 "compare_and_write": false, 00:10:30.637 "abort": true, 00:10:30.637 "seek_hole": false, 00:10:30.637 "seek_data": false, 00:10:30.637 "copy": true, 00:10:30.637 "nvme_iov_md": false 00:10:30.637 }, 00:10:30.637 "memory_domains": [ 00:10:30.637 { 00:10:30.637 "dma_device_id": "system", 00:10:30.637 "dma_device_type": 1 00:10:30.637 }, 00:10:30.637 { 00:10:30.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.637 "dma_device_type": 2 00:10:30.637 } 00:10:30.637 ], 00:10:30.637 "driver_specific": {} 00:10:30.637 } 00:10:30.637 ] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.637 [2024-11-28 16:23:22.227605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.637 [2024-11-28 16:23:22.227655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.637 [2024-11-28 16:23:22.227694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.637 [2024-11-28 16:23:22.229454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.637 [2024-11-28 16:23:22.229543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.637 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.637 "name": "Existed_Raid", 00:10:30.637 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:30.637 "strip_size_kb": 0, 00:10:30.637 "state": "configuring", 00:10:30.637 "raid_level": "raid1", 00:10:30.637 "superblock": true, 00:10:30.637 "num_base_bdevs": 4, 00:10:30.638 "num_base_bdevs_discovered": 3, 00:10:30.638 "num_base_bdevs_operational": 4, 00:10:30.638 "base_bdevs_list": [ 00:10:30.638 { 00:10:30.638 "name": "BaseBdev1", 00:10:30.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.638 "is_configured": false, 00:10:30.638 "data_offset": 0, 00:10:30.638 "data_size": 0 00:10:30.638 }, 00:10:30.638 { 00:10:30.638 "name": "BaseBdev2", 00:10:30.638 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 
00:10:30.638 "is_configured": true, 00:10:30.638 "data_offset": 2048, 00:10:30.638 "data_size": 63488 00:10:30.638 }, 00:10:30.638 { 00:10:30.638 "name": "BaseBdev3", 00:10:30.638 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:30.638 "is_configured": true, 00:10:30.638 "data_offset": 2048, 00:10:30.638 "data_size": 63488 00:10:30.638 }, 00:10:30.638 { 00:10:30.638 "name": "BaseBdev4", 00:10:30.638 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:30.638 "is_configured": true, 00:10:30.638 "data_offset": 2048, 00:10:30.638 "data_size": 63488 00:10:30.638 } 00:10:30.638 ] 00:10:30.638 }' 00:10:30.638 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.638 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.207 [2024-11-28 16:23:22.678823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.207 "name": "Existed_Raid", 00:10:31.207 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:31.207 "strip_size_kb": 0, 00:10:31.207 "state": "configuring", 00:10:31.207 "raid_level": "raid1", 00:10:31.207 "superblock": true, 00:10:31.207 "num_base_bdevs": 4, 00:10:31.207 "num_base_bdevs_discovered": 2, 00:10:31.207 "num_base_bdevs_operational": 4, 00:10:31.207 "base_bdevs_list": [ 00:10:31.207 { 00:10:31.207 "name": "BaseBdev1", 00:10:31.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.207 "is_configured": false, 00:10:31.207 "data_offset": 0, 00:10:31.207 "data_size": 0 00:10:31.207 }, 00:10:31.207 { 00:10:31.207 "name": null, 00:10:31.207 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:31.207 
"is_configured": false, 00:10:31.207 "data_offset": 0, 00:10:31.207 "data_size": 63488 00:10:31.207 }, 00:10:31.207 { 00:10:31.207 "name": "BaseBdev3", 00:10:31.207 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:31.207 "is_configured": true, 00:10:31.207 "data_offset": 2048, 00:10:31.207 "data_size": 63488 00:10:31.207 }, 00:10:31.207 { 00:10:31.207 "name": "BaseBdev4", 00:10:31.207 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:31.207 "is_configured": true, 00:10:31.207 "data_offset": 2048, 00:10:31.207 "data_size": 63488 00:10:31.207 } 00:10:31.207 ] 00:10:31.207 }' 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.207 16:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 [2024-11-28 16:23:23.129063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.467 BaseBdev1 
00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.468 [ 00:10:31.468 { 00:10:31.468 "name": "BaseBdev1", 00:10:31.468 "aliases": [ 00:10:31.468 "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e" 00:10:31.468 ], 00:10:31.468 "product_name": "Malloc disk", 00:10:31.468 "block_size": 512, 00:10:31.468 "num_blocks": 65536, 00:10:31.468 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:31.468 "assigned_rate_limits": { 00:10:31.468 
"rw_ios_per_sec": 0, 00:10:31.468 "rw_mbytes_per_sec": 0, 00:10:31.468 "r_mbytes_per_sec": 0, 00:10:31.468 "w_mbytes_per_sec": 0 00:10:31.468 }, 00:10:31.468 "claimed": true, 00:10:31.468 "claim_type": "exclusive_write", 00:10:31.468 "zoned": false, 00:10:31.468 "supported_io_types": { 00:10:31.468 "read": true, 00:10:31.468 "write": true, 00:10:31.468 "unmap": true, 00:10:31.468 "flush": true, 00:10:31.468 "reset": true, 00:10:31.468 "nvme_admin": false, 00:10:31.468 "nvme_io": false, 00:10:31.468 "nvme_io_md": false, 00:10:31.468 "write_zeroes": true, 00:10:31.468 "zcopy": true, 00:10:31.468 "get_zone_info": false, 00:10:31.468 "zone_management": false, 00:10:31.468 "zone_append": false, 00:10:31.468 "compare": false, 00:10:31.468 "compare_and_write": false, 00:10:31.468 "abort": true, 00:10:31.468 "seek_hole": false, 00:10:31.468 "seek_data": false, 00:10:31.468 "copy": true, 00:10:31.468 "nvme_iov_md": false 00:10:31.468 }, 00:10:31.468 "memory_domains": [ 00:10:31.468 { 00:10:31.468 "dma_device_id": "system", 00:10:31.468 "dma_device_type": 1 00:10:31.468 }, 00:10:31.468 { 00:10:31.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.468 "dma_device_type": 2 00:10:31.468 } 00:10:31.468 ], 00:10:31.468 "driver_specific": {} 00:10:31.468 } 00:10:31.468 ] 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.468 "name": "Existed_Raid", 00:10:31.468 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:31.468 "strip_size_kb": 0, 00:10:31.468 "state": "configuring", 00:10:31.468 "raid_level": "raid1", 00:10:31.468 "superblock": true, 00:10:31.468 "num_base_bdevs": 4, 00:10:31.468 "num_base_bdevs_discovered": 3, 00:10:31.468 "num_base_bdevs_operational": 4, 00:10:31.468 "base_bdevs_list": [ 00:10:31.468 { 00:10:31.468 "name": "BaseBdev1", 00:10:31.468 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:31.468 "is_configured": true, 00:10:31.468 "data_offset": 2048, 00:10:31.468 "data_size": 63488 
00:10:31.468 }, 00:10:31.468 { 00:10:31.468 "name": null, 00:10:31.468 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:31.468 "is_configured": false, 00:10:31.468 "data_offset": 0, 00:10:31.468 "data_size": 63488 00:10:31.468 }, 00:10:31.468 { 00:10:31.468 "name": "BaseBdev3", 00:10:31.468 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:31.468 "is_configured": true, 00:10:31.468 "data_offset": 2048, 00:10:31.468 "data_size": 63488 00:10:31.468 }, 00:10:31.468 { 00:10:31.468 "name": "BaseBdev4", 00:10:31.468 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:31.468 "is_configured": true, 00:10:31.468 "data_offset": 2048, 00:10:31.468 "data_size": 63488 00:10:31.468 } 00:10:31.468 ] 00:10:31.468 }' 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.468 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.038 
[2024-11-28 16:23:23.700122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.038 16:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.038 "name": "Existed_Raid", 00:10:32.038 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:32.038 "strip_size_kb": 0, 00:10:32.038 "state": "configuring", 00:10:32.038 "raid_level": "raid1", 00:10:32.038 "superblock": true, 00:10:32.038 "num_base_bdevs": 4, 00:10:32.038 "num_base_bdevs_discovered": 2, 00:10:32.038 "num_base_bdevs_operational": 4, 00:10:32.038 "base_bdevs_list": [ 00:10:32.038 { 00:10:32.038 "name": "BaseBdev1", 00:10:32.038 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:32.038 "is_configured": true, 00:10:32.038 "data_offset": 2048, 00:10:32.038 "data_size": 63488 00:10:32.038 }, 00:10:32.038 { 00:10:32.038 "name": null, 00:10:32.038 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:32.038 "is_configured": false, 00:10:32.038 "data_offset": 0, 00:10:32.038 "data_size": 63488 00:10:32.038 }, 00:10:32.038 { 00:10:32.038 "name": null, 00:10:32.038 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:32.038 "is_configured": false, 00:10:32.038 "data_offset": 0, 00:10:32.038 "data_size": 63488 00:10:32.038 }, 00:10:32.038 { 00:10:32.038 "name": "BaseBdev4", 00:10:32.038 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:32.038 "is_configured": true, 00:10:32.038 "data_offset": 2048, 00:10:32.038 "data_size": 63488 00:10:32.038 } 00:10:32.038 ] 00:10:32.038 }' 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.038 16:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.608 
16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 [2024-11-28 16:23:24.143406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.608 "name": "Existed_Raid", 00:10:32.608 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:32.608 "strip_size_kb": 0, 00:10:32.608 "state": "configuring", 00:10:32.608 "raid_level": "raid1", 00:10:32.608 "superblock": true, 00:10:32.608 "num_base_bdevs": 4, 00:10:32.608 "num_base_bdevs_discovered": 3, 00:10:32.608 "num_base_bdevs_operational": 4, 00:10:32.608 "base_bdevs_list": [ 00:10:32.608 { 00:10:32.608 "name": "BaseBdev1", 00:10:32.608 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:32.608 "is_configured": true, 00:10:32.608 "data_offset": 2048, 00:10:32.608 "data_size": 63488 00:10:32.608 }, 00:10:32.608 { 00:10:32.608 "name": null, 00:10:32.608 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:32.608 "is_configured": false, 00:10:32.608 "data_offset": 0, 00:10:32.608 "data_size": 63488 00:10:32.608 }, 00:10:32.608 { 00:10:32.608 "name": "BaseBdev3", 00:10:32.608 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:32.608 "is_configured": true, 00:10:32.608 "data_offset": 2048, 00:10:32.608 "data_size": 63488 00:10:32.608 }, 00:10:32.608 { 00:10:32.608 "name": "BaseBdev4", 00:10:32.608 "uuid": 
"f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:32.608 "is_configured": true, 00:10:32.608 "data_offset": 2048, 00:10:32.608 "data_size": 63488 00:10:32.608 } 00:10:32.608 ] 00:10:32.608 }' 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.608 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.868 [2024-11-28 16:23:24.602614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.868 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.128 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.128 "name": "Existed_Raid", 00:10:33.128 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:33.128 "strip_size_kb": 0, 00:10:33.128 "state": "configuring", 00:10:33.128 "raid_level": "raid1", 00:10:33.128 "superblock": true, 00:10:33.128 "num_base_bdevs": 4, 00:10:33.128 "num_base_bdevs_discovered": 2, 00:10:33.128 "num_base_bdevs_operational": 4, 00:10:33.128 "base_bdevs_list": [ 00:10:33.128 { 00:10:33.128 "name": null, 00:10:33.128 
"uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:33.128 "is_configured": false, 00:10:33.128 "data_offset": 0, 00:10:33.128 "data_size": 63488 00:10:33.128 }, 00:10:33.128 { 00:10:33.128 "name": null, 00:10:33.128 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:33.128 "is_configured": false, 00:10:33.128 "data_offset": 0, 00:10:33.128 "data_size": 63488 00:10:33.128 }, 00:10:33.128 { 00:10:33.128 "name": "BaseBdev3", 00:10:33.128 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:33.128 "is_configured": true, 00:10:33.128 "data_offset": 2048, 00:10:33.128 "data_size": 63488 00:10:33.128 }, 00:10:33.128 { 00:10:33.128 "name": "BaseBdev4", 00:10:33.128 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:33.128 "is_configured": true, 00:10:33.128 "data_offset": 2048, 00:10:33.128 "data_size": 63488 00:10:33.128 } 00:10:33.128 ] 00:10:33.128 }' 00:10:33.128 16:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.128 16:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.388 [2024-11-28 16:23:25.132128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.388 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.647 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.647 "name": "Existed_Raid", 00:10:33.647 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:33.647 "strip_size_kb": 0, 00:10:33.647 "state": "configuring", 00:10:33.647 "raid_level": "raid1", 00:10:33.647 "superblock": true, 00:10:33.647 "num_base_bdevs": 4, 00:10:33.647 "num_base_bdevs_discovered": 3, 00:10:33.647 "num_base_bdevs_operational": 4, 00:10:33.647 "base_bdevs_list": [ 00:10:33.647 { 00:10:33.647 "name": null, 00:10:33.647 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:33.647 "is_configured": false, 00:10:33.647 "data_offset": 0, 00:10:33.647 "data_size": 63488 00:10:33.647 }, 00:10:33.647 { 00:10:33.647 "name": "BaseBdev2", 00:10:33.647 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:33.647 "is_configured": true, 00:10:33.647 "data_offset": 2048, 00:10:33.647 "data_size": 63488 00:10:33.647 }, 00:10:33.647 { 00:10:33.647 "name": "BaseBdev3", 00:10:33.647 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:33.647 "is_configured": true, 00:10:33.647 "data_offset": 2048, 00:10:33.647 "data_size": 63488 00:10:33.647 }, 00:10:33.647 { 00:10:33.647 "name": "BaseBdev4", 00:10:33.647 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:33.647 "is_configured": true, 00:10:33.647 "data_offset": 2048, 00:10:33.647 "data_size": 63488 00:10:33.647 } 00:10:33.647 ] 00:10:33.647 }' 00:10:33.647 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.647 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59cb7c5c-4ab1-4f29-9926-488f08fc8e6e 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.906 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.906 [2024-11-28 16:23:25.662221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.906 [2024-11-28 16:23:25.662418] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:33.907 [2024-11-28 16:23:25.662436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.907 [2024-11-28 16:23:25.662669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:33.907 
[2024-11-28 16:23:25.662804] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:33.907 [2024-11-28 16:23:25.662819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:33.907 [2024-11-28 16:23:25.662931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.907 NewBaseBdev 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.907 16:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.167 [ 00:10:34.167 { 00:10:34.167 "name": "NewBaseBdev", 00:10:34.167 "aliases": [ 00:10:34.167 "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e" 00:10:34.167 ], 00:10:34.167 "product_name": "Malloc disk", 00:10:34.167 "block_size": 512, 00:10:34.167 "num_blocks": 65536, 00:10:34.167 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:34.167 "assigned_rate_limits": { 00:10:34.167 "rw_ios_per_sec": 0, 00:10:34.167 "rw_mbytes_per_sec": 0, 00:10:34.167 "r_mbytes_per_sec": 0, 00:10:34.167 "w_mbytes_per_sec": 0 00:10:34.167 }, 00:10:34.167 "claimed": true, 00:10:34.167 "claim_type": "exclusive_write", 00:10:34.167 "zoned": false, 00:10:34.167 "supported_io_types": { 00:10:34.167 "read": true, 00:10:34.167 "write": true, 00:10:34.167 "unmap": true, 00:10:34.167 "flush": true, 00:10:34.167 "reset": true, 00:10:34.167 "nvme_admin": false, 00:10:34.167 "nvme_io": false, 00:10:34.167 "nvme_io_md": false, 00:10:34.167 "write_zeroes": true, 00:10:34.167 "zcopy": true, 00:10:34.167 "get_zone_info": false, 00:10:34.167 "zone_management": false, 00:10:34.167 "zone_append": false, 00:10:34.167 "compare": false, 00:10:34.167 "compare_and_write": false, 00:10:34.167 "abort": true, 00:10:34.167 "seek_hole": false, 00:10:34.167 "seek_data": false, 00:10:34.167 "copy": true, 00:10:34.167 "nvme_iov_md": false 00:10:34.167 }, 00:10:34.167 "memory_domains": [ 00:10:34.167 { 00:10:34.167 "dma_device_id": "system", 00:10:34.167 "dma_device_type": 1 00:10:34.167 }, 00:10:34.167 { 00:10:34.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.167 "dma_device_type": 2 00:10:34.167 } 00:10:34.167 ], 00:10:34.167 "driver_specific": {} 00:10:34.167 } 00:10:34.167 ] 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.167 "name": "Existed_Raid", 00:10:34.167 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:34.167 "strip_size_kb": 0, 00:10:34.167 "state": "online", 00:10:34.167 "raid_level": 
"raid1", 00:10:34.167 "superblock": true, 00:10:34.167 "num_base_bdevs": 4, 00:10:34.167 "num_base_bdevs_discovered": 4, 00:10:34.167 "num_base_bdevs_operational": 4, 00:10:34.167 "base_bdevs_list": [ 00:10:34.167 { 00:10:34.167 "name": "NewBaseBdev", 00:10:34.167 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:34.167 "is_configured": true, 00:10:34.167 "data_offset": 2048, 00:10:34.167 "data_size": 63488 00:10:34.167 }, 00:10:34.167 { 00:10:34.167 "name": "BaseBdev2", 00:10:34.167 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:34.167 "is_configured": true, 00:10:34.167 "data_offset": 2048, 00:10:34.167 "data_size": 63488 00:10:34.167 }, 00:10:34.167 { 00:10:34.167 "name": "BaseBdev3", 00:10:34.167 "uuid": "548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:34.167 "is_configured": true, 00:10:34.167 "data_offset": 2048, 00:10:34.167 "data_size": 63488 00:10:34.167 }, 00:10:34.167 { 00:10:34.167 "name": "BaseBdev4", 00:10:34.167 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:34.167 "is_configured": true, 00:10:34.167 "data_offset": 2048, 00:10:34.167 "data_size": 63488 00:10:34.167 } 00:10:34.167 ] 00:10:34.167 }' 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.167 16:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.428 [2024-11-28 16:23:26.101800] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.428 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.428 "name": "Existed_Raid", 00:10:34.428 "aliases": [ 00:10:34.428 "b1932127-e22d-4473-8142-0084358b3ae1" 00:10:34.428 ], 00:10:34.428 "product_name": "Raid Volume", 00:10:34.428 "block_size": 512, 00:10:34.428 "num_blocks": 63488, 00:10:34.428 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:34.428 "assigned_rate_limits": { 00:10:34.428 "rw_ios_per_sec": 0, 00:10:34.428 "rw_mbytes_per_sec": 0, 00:10:34.428 "r_mbytes_per_sec": 0, 00:10:34.428 "w_mbytes_per_sec": 0 00:10:34.428 }, 00:10:34.428 "claimed": false, 00:10:34.428 "zoned": false, 00:10:34.428 "supported_io_types": { 00:10:34.428 "read": true, 00:10:34.428 "write": true, 00:10:34.428 "unmap": false, 00:10:34.428 "flush": false, 00:10:34.428 "reset": true, 00:10:34.428 "nvme_admin": false, 00:10:34.428 "nvme_io": false, 00:10:34.428 "nvme_io_md": false, 00:10:34.428 "write_zeroes": true, 00:10:34.428 "zcopy": false, 00:10:34.428 "get_zone_info": false, 00:10:34.428 "zone_management": false, 00:10:34.428 "zone_append": false, 00:10:34.428 "compare": false, 00:10:34.428 "compare_and_write": false, 00:10:34.428 "abort": false, 00:10:34.428 "seek_hole": false, 
00:10:34.428 "seek_data": false, 00:10:34.428 "copy": false, 00:10:34.428 "nvme_iov_md": false 00:10:34.428 }, 00:10:34.428 "memory_domains": [ 00:10:34.428 { 00:10:34.428 "dma_device_id": "system", 00:10:34.428 "dma_device_type": 1 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.428 "dma_device_type": 2 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "system", 00:10:34.428 "dma_device_type": 1 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.428 "dma_device_type": 2 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "system", 00:10:34.428 "dma_device_type": 1 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.428 "dma_device_type": 2 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "system", 00:10:34.428 "dma_device_type": 1 00:10:34.428 }, 00:10:34.428 { 00:10:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.428 "dma_device_type": 2 00:10:34.428 } 00:10:34.428 ], 00:10:34.428 "driver_specific": { 00:10:34.428 "raid": { 00:10:34.428 "uuid": "b1932127-e22d-4473-8142-0084358b3ae1", 00:10:34.428 "strip_size_kb": 0, 00:10:34.428 "state": "online", 00:10:34.428 "raid_level": "raid1", 00:10:34.428 "superblock": true, 00:10:34.428 "num_base_bdevs": 4, 00:10:34.428 "num_base_bdevs_discovered": 4, 00:10:34.429 "num_base_bdevs_operational": 4, 00:10:34.429 "base_bdevs_list": [ 00:10:34.429 { 00:10:34.429 "name": "NewBaseBdev", 00:10:34.429 "uuid": "59cb7c5c-4ab1-4f29-9926-488f08fc8e6e", 00:10:34.429 "is_configured": true, 00:10:34.429 "data_offset": 2048, 00:10:34.429 "data_size": 63488 00:10:34.429 }, 00:10:34.429 { 00:10:34.429 "name": "BaseBdev2", 00:10:34.429 "uuid": "7f9073a9-9f8e-4e6c-ac30-d928afe7a145", 00:10:34.429 "is_configured": true, 00:10:34.429 "data_offset": 2048, 00:10:34.429 "data_size": 63488 00:10:34.429 }, 00:10:34.429 { 00:10:34.429 "name": "BaseBdev3", 00:10:34.429 "uuid": 
"548601ef-b9c4-4c72-b80e-c8fb72f757d2", 00:10:34.429 "is_configured": true, 00:10:34.429 "data_offset": 2048, 00:10:34.429 "data_size": 63488 00:10:34.429 }, 00:10:34.429 { 00:10:34.429 "name": "BaseBdev4", 00:10:34.429 "uuid": "f9e1bf76-dd4d-47a7-8a1c-ef44993b46c1", 00:10:34.429 "is_configured": true, 00:10:34.429 "data_offset": 2048, 00:10:34.429 "data_size": 63488 00:10:34.429 } 00:10:34.429 ] 00:10:34.429 } 00:10:34.429 } 00:10:34.429 }' 00:10:34.429 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.429 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.429 BaseBdev2 00:10:34.429 BaseBdev3 00:10:34.429 BaseBdev4' 00:10:34.429 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.689 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.690 
16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.690 [2024-11-28 16:23:26.420919] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.690 [2024-11-28 16:23:26.420946] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.690 [2024-11-28 16:23:26.421014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.690 [2024-11-28 16:23:26.421261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.690 [2024-11-28 16:23:26.421302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:34.690 16:23:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84638 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84638 ']' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84638 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.690 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84638 00:10:34.950 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.950 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.950 killing process with pid 84638 00:10:34.950 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84638' 00:10:34.950 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84638 00:10:34.950 [2024-11-28 16:23:26.461334] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.950 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84638 00:10:34.950 [2024-11-28 16:23:26.501436] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.210 16:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.211 00:10:35.211 real 0m9.465s 00:10:35.211 user 0m16.190s 00:10:35.211 sys 0m2.025s 00:10:35.211 16:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.211 16:23:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.211 ************************************ 00:10:35.211 END TEST raid_state_function_test_sb 00:10:35.211 ************************************ 00:10:35.211 16:23:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:35.211 16:23:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.211 16:23:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.211 16:23:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.211 ************************************ 00:10:35.211 START TEST raid_superblock_test 00:10:35.211 ************************************ 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85287 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85287 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85287 ']' 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.211 16:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.211 [2024-11-28 16:23:26.905141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:35.211 [2024-11-28 16:23:26.905254] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85287 ] 00:10:35.470 [2024-11-28 16:23:27.065379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.471 [2024-11-28 16:23:27.112274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.471 [2024-11-28 16:23:27.154819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.471 [2024-11-28 16:23:27.154873] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:36.041 
16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.041 malloc1 00:10:36.041 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.042 [2024-11-28 16:23:27.769105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:36.042 [2024-11-28 16:23:27.769201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.042 [2024-11-28 16:23:27.769223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.042 [2024-11-28 16:23:27.769237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.042 [2024-11-28 16:23:27.771325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.042 [2024-11-28 16:23:27.771361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:36.042 pt1 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.042 malloc2 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.042 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.042 [2024-11-28 16:23:27.810979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.042 [2024-11-28 16:23:27.811052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.042 [2024-11-28 16:23:27.811074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.042 [2024-11-28 16:23:27.811090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.303 [2024-11-28 16:23:27.814016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.303 [2024-11-28 16:23:27.814066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.303 
pt2 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 malloc3 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 [2024-11-28 16:23:27.839443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.303 [2024-11-28 16:23:27.839508] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.303 [2024-11-28 16:23:27.839527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.303 [2024-11-28 16:23:27.839537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.303 [2024-11-28 16:23:27.841550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.303 [2024-11-28 16:23:27.841584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.303 pt3 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 malloc4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 [2024-11-28 16:23:27.868068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:36.303 [2024-11-28 16:23:27.868118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.303 [2024-11-28 16:23:27.868133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:36.303 [2024-11-28 16:23:27.868147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.303 [2024-11-28 16:23:27.870107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.303 [2024-11-28 16:23:27.870144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:36.303 pt4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 [2024-11-28 16:23:27.880135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.303 [2024-11-28 16:23:27.881820] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.303 [2024-11-28 16:23:27.881887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.303 [2024-11-28 16:23:27.881926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:36.303 [2024-11-28 16:23:27.882075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:36.303 [2024-11-28 16:23:27.882088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.303 [2024-11-28 16:23:27.882340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:36.303 [2024-11-28 16:23:27.882501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:36.303 [2024-11-28 16:23:27.882522] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:36.303 [2024-11-28 16:23:27.882636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.303 
16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.303 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.304 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.304 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.304 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.304 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.304 "name": "raid_bdev1", 00:10:36.304 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:36.304 "strip_size_kb": 0, 00:10:36.304 "state": "online", 00:10:36.304 "raid_level": "raid1", 00:10:36.304 "superblock": true, 00:10:36.304 "num_base_bdevs": 4, 00:10:36.304 "num_base_bdevs_discovered": 4, 00:10:36.304 "num_base_bdevs_operational": 4, 00:10:36.304 "base_bdevs_list": [ 00:10:36.304 { 00:10:36.304 "name": "pt1", 00:10:36.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.304 "is_configured": true, 00:10:36.304 "data_offset": 2048, 00:10:36.304 "data_size": 63488 00:10:36.304 }, 00:10:36.304 { 00:10:36.304 "name": "pt2", 00:10:36.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.304 "is_configured": true, 00:10:36.304 "data_offset": 2048, 00:10:36.304 "data_size": 63488 00:10:36.304 }, 00:10:36.304 { 00:10:36.304 "name": "pt3", 00:10:36.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.304 "is_configured": true, 00:10:36.304 "data_offset": 2048, 00:10:36.304 "data_size": 63488 
00:10:36.304 }, 00:10:36.304 { 00:10:36.304 "name": "pt4", 00:10:36.304 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.304 "is_configured": true, 00:10:36.304 "data_offset": 2048, 00:10:36.304 "data_size": 63488 00:10:36.304 } 00:10:36.304 ] 00:10:36.304 }' 00:10:36.304 16:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.304 16:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.564 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.824 [2024-11-28 16:23:28.335588] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.824 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.824 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.824 "name": "raid_bdev1", 00:10:36.824 "aliases": [ 00:10:36.824 "ebf30d5a-7048-4aa5-84ac-a3060173314d" 00:10:36.824 ], 
00:10:36.824 "product_name": "Raid Volume", 00:10:36.824 "block_size": 512, 00:10:36.824 "num_blocks": 63488, 00:10:36.824 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:36.824 "assigned_rate_limits": { 00:10:36.824 "rw_ios_per_sec": 0, 00:10:36.824 "rw_mbytes_per_sec": 0, 00:10:36.824 "r_mbytes_per_sec": 0, 00:10:36.824 "w_mbytes_per_sec": 0 00:10:36.824 }, 00:10:36.824 "claimed": false, 00:10:36.824 "zoned": false, 00:10:36.824 "supported_io_types": { 00:10:36.824 "read": true, 00:10:36.824 "write": true, 00:10:36.824 "unmap": false, 00:10:36.824 "flush": false, 00:10:36.824 "reset": true, 00:10:36.824 "nvme_admin": false, 00:10:36.824 "nvme_io": false, 00:10:36.824 "nvme_io_md": false, 00:10:36.824 "write_zeroes": true, 00:10:36.824 "zcopy": false, 00:10:36.824 "get_zone_info": false, 00:10:36.824 "zone_management": false, 00:10:36.824 "zone_append": false, 00:10:36.824 "compare": false, 00:10:36.824 "compare_and_write": false, 00:10:36.824 "abort": false, 00:10:36.824 "seek_hole": false, 00:10:36.824 "seek_data": false, 00:10:36.824 "copy": false, 00:10:36.824 "nvme_iov_md": false 00:10:36.824 }, 00:10:36.824 "memory_domains": [ 00:10:36.824 { 00:10:36.824 "dma_device_id": "system", 00:10:36.824 "dma_device_type": 1 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.824 "dma_device_type": 2 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": "system", 00:10:36.824 "dma_device_type": 1 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.824 "dma_device_type": 2 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": "system", 00:10:36.824 "dma_device_type": 1 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.824 "dma_device_type": 2 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": "system", 00:10:36.824 "dma_device_type": 1 00:10:36.824 }, 00:10:36.824 { 00:10:36.824 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:36.824 "dma_device_type": 2 00:10:36.824 } 00:10:36.824 ], 00:10:36.824 "driver_specific": { 00:10:36.824 "raid": { 00:10:36.825 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:36.825 "strip_size_kb": 0, 00:10:36.825 "state": "online", 00:10:36.825 "raid_level": "raid1", 00:10:36.825 "superblock": true, 00:10:36.825 "num_base_bdevs": 4, 00:10:36.825 "num_base_bdevs_discovered": 4, 00:10:36.825 "num_base_bdevs_operational": 4, 00:10:36.825 "base_bdevs_list": [ 00:10:36.825 { 00:10:36.825 "name": "pt1", 00:10:36.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.825 "is_configured": true, 00:10:36.825 "data_offset": 2048, 00:10:36.825 "data_size": 63488 00:10:36.825 }, 00:10:36.825 { 00:10:36.825 "name": "pt2", 00:10:36.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.825 "is_configured": true, 00:10:36.825 "data_offset": 2048, 00:10:36.825 "data_size": 63488 00:10:36.825 }, 00:10:36.825 { 00:10:36.825 "name": "pt3", 00:10:36.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.825 "is_configured": true, 00:10:36.825 "data_offset": 2048, 00:10:36.825 "data_size": 63488 00:10:36.825 }, 00:10:36.825 { 00:10:36.825 "name": "pt4", 00:10:36.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:36.825 "is_configured": true, 00:10:36.825 "data_offset": 2048, 00:10:36.825 "data_size": 63488 00:10:36.825 } 00:10:36.825 ] 00:10:36.825 } 00:10:36.825 } 00:10:36.825 }' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.825 pt2 00:10:36.825 pt3 00:10:36.825 pt4' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.825 16:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.825 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 [2024-11-28 16:23:28.623086] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ebf30d5a-7048-4aa5-84ac-a3060173314d 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ebf30d5a-7048-4aa5-84ac-a3060173314d ']' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 [2024-11-28 16:23:28.666713] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.085 [2024-11-28 16:23:28.666747] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.085 [2024-11-28 16:23:28.666815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.085 [2024-11-28 16:23:28.666909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.085 [2024-11-28 16:23:28.666920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:37.085 16:23:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:37.085 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.086 [2024-11-28 16:23:28.814519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:37.086 [2024-11-28 16:23:28.816391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:37.086 [2024-11-28 16:23:28.816449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:37.086 [2024-11-28 16:23:28.816478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:37.086 [2024-11-28 16:23:28.816524] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:37.086 [2024-11-28 16:23:28.816569] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:37.086 [2024-11-28 16:23:28.816606] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:37.086 [2024-11-28 16:23:28.816623] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:37.086 [2024-11-28 16:23:28.816638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.086 [2024-11-28 16:23:28.816647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:10:37.086 request: 00:10:37.086 { 00:10:37.086 "name": "raid_bdev1", 00:10:37.086 "raid_level": "raid1", 00:10:37.086 "base_bdevs": [ 00:10:37.086 "malloc1", 00:10:37.086 "malloc2", 00:10:37.086 "malloc3", 00:10:37.086 "malloc4" 00:10:37.086 ], 00:10:37.086 "superblock": false, 00:10:37.086 "method": "bdev_raid_create", 00:10:37.086 "req_id": 1 00:10:37.086 } 00:10:37.086 Got JSON-RPC error response 00:10:37.086 response: 00:10:37.086 { 00:10:37.086 "code": -17, 00:10:37.086 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:37.086 } 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:37.086 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.345 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:37.345 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:37.345 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.345 
16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.345 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.346 [2024-11-28 16:23:28.874354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.346 [2024-11-28 16:23:28.874408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.346 [2024-11-28 16:23:28.874443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:37.346 [2024-11-28 16:23:28.874452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.346 [2024-11-28 16:23:28.876566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.346 [2024-11-28 16:23:28.876605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.346 [2024-11-28 16:23:28.876677] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:37.346 [2024-11-28 16:23:28.876709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.346 pt1 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.346 16:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.346 "name": "raid_bdev1", 00:10:37.346 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:37.346 "strip_size_kb": 0, 00:10:37.346 "state": "configuring", 00:10:37.346 "raid_level": "raid1", 00:10:37.346 "superblock": true, 00:10:37.346 "num_base_bdevs": 4, 00:10:37.346 "num_base_bdevs_discovered": 1, 00:10:37.346 "num_base_bdevs_operational": 4, 00:10:37.346 "base_bdevs_list": [ 00:10:37.346 { 00:10:37.346 "name": "pt1", 00:10:37.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.346 "is_configured": true, 00:10:37.346 "data_offset": 2048, 00:10:37.346 "data_size": 63488 00:10:37.346 }, 00:10:37.346 { 00:10:37.346 "name": null, 00:10:37.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.346 "is_configured": false, 00:10:37.346 "data_offset": 2048, 00:10:37.346 "data_size": 63488 00:10:37.346 }, 00:10:37.346 { 00:10:37.346 "name": null, 00:10:37.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.346 
"is_configured": false, 00:10:37.346 "data_offset": 2048, 00:10:37.346 "data_size": 63488 00:10:37.346 }, 00:10:37.346 { 00:10:37.346 "name": null, 00:10:37.346 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.346 "is_configured": false, 00:10:37.346 "data_offset": 2048, 00:10:37.346 "data_size": 63488 00:10:37.346 } 00:10:37.346 ] 00:10:37.346 }' 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.346 16:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.606 [2024-11-28 16:23:29.329618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.606 [2024-11-28 16:23:29.329680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.606 [2024-11-28 16:23:29.329702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:37.606 [2024-11-28 16:23:29.329711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.606 [2024-11-28 16:23:29.330107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.606 [2024-11-28 16:23:29.330134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.606 [2024-11-28 16:23:29.330209] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.606 [2024-11-28 16:23:29.330235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:37.606 pt2 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.606 [2024-11-28 16:23:29.337614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.606 16:23:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.606 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.866 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.866 "name": "raid_bdev1", 00:10:37.866 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:37.866 "strip_size_kb": 0, 00:10:37.866 "state": "configuring", 00:10:37.866 "raid_level": "raid1", 00:10:37.866 "superblock": true, 00:10:37.866 "num_base_bdevs": 4, 00:10:37.866 "num_base_bdevs_discovered": 1, 00:10:37.866 "num_base_bdevs_operational": 4, 00:10:37.866 "base_bdevs_list": [ 00:10:37.866 { 00:10:37.866 "name": "pt1", 00:10:37.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.866 "is_configured": true, 00:10:37.866 "data_offset": 2048, 00:10:37.866 "data_size": 63488 00:10:37.866 }, 00:10:37.866 { 00:10:37.866 "name": null, 00:10:37.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.866 "is_configured": false, 00:10:37.866 "data_offset": 0, 00:10:37.866 "data_size": 63488 00:10:37.866 }, 00:10:37.866 { 00:10:37.866 "name": null, 00:10:37.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.866 "is_configured": false, 00:10:37.866 "data_offset": 2048, 00:10:37.866 "data_size": 63488 00:10:37.866 }, 00:10:37.866 { 00:10:37.866 "name": null, 00:10:37.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.866 "is_configured": false, 00:10:37.866 "data_offset": 2048, 00:10:37.866 "data_size": 63488 00:10:37.866 } 00:10:37.866 ] 00:10:37.866 }' 00:10:37.866 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.866 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.125 [2024-11-28 16:23:29.792815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.125 [2024-11-28 16:23:29.792887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.125 [2024-11-28 16:23:29.792904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:38.125 [2024-11-28 16:23:29.792913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.125 [2024-11-28 16:23:29.793258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.125 [2024-11-28 16:23:29.793288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.125 [2024-11-28 16:23:29.793352] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.125 [2024-11-28 16:23:29.793375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.125 pt2 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.125 16:23:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.125 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.125 [2024-11-28 16:23:29.804767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.125 [2024-11-28 16:23:29.804826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.126 [2024-11-28 16:23:29.804874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:38.126 [2024-11-28 16:23:29.804884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.126 [2024-11-28 16:23:29.805204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.126 [2024-11-28 16:23:29.805233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.126 [2024-11-28 16:23:29.805289] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:38.126 [2024-11-28 16:23:29.805307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.126 pt3 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.126 [2024-11-28 16:23:29.816758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:38.126 [2024-11-28 
16:23:29.816810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.126 [2024-11-28 16:23:29.816823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:38.126 [2024-11-28 16:23:29.816841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.126 [2024-11-28 16:23:29.817124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.126 [2024-11-28 16:23:29.817147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:38.126 [2024-11-28 16:23:29.817193] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:38.126 [2024-11-28 16:23:29.817211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:38.126 [2024-11-28 16:23:29.817313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:38.126 [2024-11-28 16:23:29.817337] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.126 [2024-11-28 16:23:29.817561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:38.126 [2024-11-28 16:23:29.817684] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:38.126 [2024-11-28 16:23:29.817698] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:38.126 [2024-11-28 16:23:29.817796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.126 pt4 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.126 "name": "raid_bdev1", 00:10:38.126 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:38.126 "strip_size_kb": 0, 00:10:38.126 "state": "online", 00:10:38.126 "raid_level": "raid1", 00:10:38.126 "superblock": true, 00:10:38.126 "num_base_bdevs": 4, 00:10:38.126 
"num_base_bdevs_discovered": 4, 00:10:38.126 "num_base_bdevs_operational": 4, 00:10:38.126 "base_bdevs_list": [ 00:10:38.126 { 00:10:38.126 "name": "pt1", 00:10:38.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.126 "is_configured": true, 00:10:38.126 "data_offset": 2048, 00:10:38.126 "data_size": 63488 00:10:38.126 }, 00:10:38.126 { 00:10:38.126 "name": "pt2", 00:10:38.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.126 "is_configured": true, 00:10:38.126 "data_offset": 2048, 00:10:38.126 "data_size": 63488 00:10:38.126 }, 00:10:38.126 { 00:10:38.126 "name": "pt3", 00:10:38.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.126 "is_configured": true, 00:10:38.126 "data_offset": 2048, 00:10:38.126 "data_size": 63488 00:10:38.126 }, 00:10:38.126 { 00:10:38.126 "name": "pt4", 00:10:38.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.126 "is_configured": true, 00:10:38.126 "data_offset": 2048, 00:10:38.126 "data_size": 63488 00:10:38.126 } 00:10:38.126 ] 00:10:38.126 }' 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.126 16:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.694 16:23:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 [2024-11-28 16:23:30.284293] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.694 "name": "raid_bdev1", 00:10:38.694 "aliases": [ 00:10:38.694 "ebf30d5a-7048-4aa5-84ac-a3060173314d" 00:10:38.694 ], 00:10:38.694 "product_name": "Raid Volume", 00:10:38.694 "block_size": 512, 00:10:38.694 "num_blocks": 63488, 00:10:38.694 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:38.694 "assigned_rate_limits": { 00:10:38.694 "rw_ios_per_sec": 0, 00:10:38.694 "rw_mbytes_per_sec": 0, 00:10:38.694 "r_mbytes_per_sec": 0, 00:10:38.694 "w_mbytes_per_sec": 0 00:10:38.694 }, 00:10:38.694 "claimed": false, 00:10:38.694 "zoned": false, 00:10:38.694 "supported_io_types": { 00:10:38.694 "read": true, 00:10:38.694 "write": true, 00:10:38.694 "unmap": false, 00:10:38.694 "flush": false, 00:10:38.694 "reset": true, 00:10:38.694 "nvme_admin": false, 00:10:38.694 "nvme_io": false, 00:10:38.694 "nvme_io_md": false, 00:10:38.694 "write_zeroes": true, 00:10:38.694 "zcopy": false, 00:10:38.694 "get_zone_info": false, 00:10:38.694 "zone_management": false, 00:10:38.694 "zone_append": false, 00:10:38.694 "compare": false, 00:10:38.694 "compare_and_write": false, 00:10:38.694 "abort": false, 00:10:38.694 "seek_hole": false, 00:10:38.694 "seek_data": false, 00:10:38.694 "copy": false, 00:10:38.694 "nvme_iov_md": false 00:10:38.694 }, 00:10:38.694 "memory_domains": [ 00:10:38.694 { 00:10:38.694 "dma_device_id": "system", 00:10:38.694 
"dma_device_type": 1 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.694 "dma_device_type": 2 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "system", 00:10:38.694 "dma_device_type": 1 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.694 "dma_device_type": 2 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "system", 00:10:38.694 "dma_device_type": 1 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.694 "dma_device_type": 2 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "system", 00:10:38.694 "dma_device_type": 1 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.694 "dma_device_type": 2 00:10:38.694 } 00:10:38.694 ], 00:10:38.694 "driver_specific": { 00:10:38.694 "raid": { 00:10:38.694 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:38.694 "strip_size_kb": 0, 00:10:38.694 "state": "online", 00:10:38.694 "raid_level": "raid1", 00:10:38.694 "superblock": true, 00:10:38.694 "num_base_bdevs": 4, 00:10:38.694 "num_base_bdevs_discovered": 4, 00:10:38.694 "num_base_bdevs_operational": 4, 00:10:38.694 "base_bdevs_list": [ 00:10:38.694 { 00:10:38.694 "name": "pt1", 00:10:38.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.694 "is_configured": true, 00:10:38.694 "data_offset": 2048, 00:10:38.694 "data_size": 63488 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "name": "pt2", 00:10:38.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.694 "is_configured": true, 00:10:38.694 "data_offset": 2048, 00:10:38.694 "data_size": 63488 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "name": "pt3", 00:10:38.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.694 "is_configured": true, 00:10:38.694 "data_offset": 2048, 00:10:38.694 "data_size": 63488 00:10:38.694 }, 00:10:38.694 { 00:10:38.694 "name": "pt4", 00:10:38.694 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:38.694 "is_configured": true, 00:10:38.694 "data_offset": 2048, 00:10:38.694 "data_size": 63488 00:10:38.694 } 00:10:38.694 ] 00:10:38.694 } 00:10:38.694 } 00:10:38.694 }' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.694 pt2 00:10:38.694 pt3 00:10:38.694 pt4' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.694 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:38.954 [2024-11-28 16:23:30.579842] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ebf30d5a-7048-4aa5-84ac-a3060173314d '!=' ebf30d5a-7048-4aa5-84ac-a3060173314d ']' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.954 [2024-11-28 16:23:30.627503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:38.954 16:23:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.954 "name": "raid_bdev1", 00:10:38.954 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:38.954 "strip_size_kb": 0, 00:10:38.954 "state": "online", 
00:10:38.954 "raid_level": "raid1", 00:10:38.954 "superblock": true, 00:10:38.954 "num_base_bdevs": 4, 00:10:38.954 "num_base_bdevs_discovered": 3, 00:10:38.954 "num_base_bdevs_operational": 3, 00:10:38.954 "base_bdevs_list": [ 00:10:38.954 { 00:10:38.954 "name": null, 00:10:38.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.954 "is_configured": false, 00:10:38.954 "data_offset": 0, 00:10:38.954 "data_size": 63488 00:10:38.954 }, 00:10:38.954 { 00:10:38.954 "name": "pt2", 00:10:38.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.954 "is_configured": true, 00:10:38.954 "data_offset": 2048, 00:10:38.954 "data_size": 63488 00:10:38.954 }, 00:10:38.954 { 00:10:38.954 "name": "pt3", 00:10:38.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.954 "is_configured": true, 00:10:38.954 "data_offset": 2048, 00:10:38.954 "data_size": 63488 00:10:38.954 }, 00:10:38.954 { 00:10:38.954 "name": "pt4", 00:10:38.954 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.954 "is_configured": true, 00:10:38.954 "data_offset": 2048, 00:10:38.954 "data_size": 63488 00:10:38.954 } 00:10:38.954 ] 00:10:38.954 }' 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.954 16:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 [2024-11-28 16:23:31.070700] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.523 [2024-11-28 16:23:31.070728] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.523 [2024-11-28 16:23:31.070797] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:39.523 [2024-11-28 16:23:31.070874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.523 [2024-11-28 16:23:31.070885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.523 
16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.523 [2024-11-28 16:23:31.146564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.523 [2024-11-28 16:23:31.146640] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.523 [2024-11-28 16:23:31.146657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:39.523 [2024-11-28 16:23:31.146668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.523 [2024-11-28 16:23:31.148807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.523 [2024-11-28 16:23:31.148856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.523 [2024-11-28 16:23:31.148922] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.523 [2024-11-28 16:23:31.148953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.523 pt2 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.523 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.524 "name": "raid_bdev1", 00:10:39.524 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:39.524 "strip_size_kb": 0, 00:10:39.524 "state": "configuring", 00:10:39.524 "raid_level": "raid1", 00:10:39.524 "superblock": true, 00:10:39.524 "num_base_bdevs": 4, 00:10:39.524 "num_base_bdevs_discovered": 1, 00:10:39.524 "num_base_bdevs_operational": 3, 00:10:39.524 "base_bdevs_list": [ 00:10:39.524 { 00:10:39.524 "name": null, 00:10:39.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.524 "is_configured": false, 00:10:39.524 "data_offset": 2048, 00:10:39.524 "data_size": 63488 00:10:39.524 }, 00:10:39.524 { 00:10:39.524 "name": "pt2", 00:10:39.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.524 "is_configured": true, 00:10:39.524 "data_offset": 2048, 00:10:39.524 "data_size": 63488 00:10:39.524 }, 00:10:39.524 { 00:10:39.524 "name": null, 00:10:39.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.524 "is_configured": false, 00:10:39.524 "data_offset": 2048, 00:10:39.524 "data_size": 63488 00:10:39.524 }, 00:10:39.524 { 00:10:39.524 "name": null, 00:10:39.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.524 "is_configured": false, 00:10:39.524 "data_offset": 2048, 00:10:39.524 "data_size": 63488 00:10:39.524 } 00:10:39.524 ] 00:10:39.524 }' 
00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.524 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.094 [2024-11-28 16:23:31.605839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:40.094 [2024-11-28 16:23:31.605943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.094 [2024-11-28 16:23:31.605978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:40.094 [2024-11-28 16:23:31.606011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.094 [2024-11-28 16:23:31.606412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.094 [2024-11-28 16:23:31.606470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:40.094 [2024-11-28 16:23:31.606571] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:40.094 [2024-11-28 16:23:31.606621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:40.094 pt3 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.094 "name": "raid_bdev1", 00:10:40.094 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:40.094 "strip_size_kb": 0, 00:10:40.094 "state": "configuring", 00:10:40.094 "raid_level": "raid1", 00:10:40.094 "superblock": true, 00:10:40.094 "num_base_bdevs": 4, 00:10:40.094 "num_base_bdevs_discovered": 2, 00:10:40.094 "num_base_bdevs_operational": 3, 00:10:40.094 
"base_bdevs_list": [ 00:10:40.094 { 00:10:40.094 "name": null, 00:10:40.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.094 "is_configured": false, 00:10:40.094 "data_offset": 2048, 00:10:40.094 "data_size": 63488 00:10:40.094 }, 00:10:40.094 { 00:10:40.094 "name": "pt2", 00:10:40.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.094 "is_configured": true, 00:10:40.094 "data_offset": 2048, 00:10:40.094 "data_size": 63488 00:10:40.094 }, 00:10:40.094 { 00:10:40.094 "name": "pt3", 00:10:40.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.094 "is_configured": true, 00:10:40.094 "data_offset": 2048, 00:10:40.094 "data_size": 63488 00:10:40.094 }, 00:10:40.094 { 00:10:40.094 "name": null, 00:10:40.094 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.094 "is_configured": false, 00:10:40.094 "data_offset": 2048, 00:10:40.094 "data_size": 63488 00:10:40.094 } 00:10:40.094 ] 00:10:40.094 }' 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.094 16:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.354 [2024-11-28 16:23:32.080963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:40.354 [2024-11-28 16:23:32.081025] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.354 [2024-11-28 16:23:32.081045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:40.354 [2024-11-28 16:23:32.081055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.354 [2024-11-28 16:23:32.081406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.354 [2024-11-28 16:23:32.081423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:40.354 [2024-11-28 16:23:32.081491] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:40.354 [2024-11-28 16:23:32.081519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:40.354 [2024-11-28 16:23:32.081618] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:40.354 [2024-11-28 16:23:32.081629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.354 [2024-11-28 16:23:32.081870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:40.354 [2024-11-28 16:23:32.081993] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:40.354 [2024-11-28 16:23:32.082002] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:40.354 [2024-11-28 16:23:32.082105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.354 pt4 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.354 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.614 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.614 "name": "raid_bdev1", 00:10:40.614 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:40.614 "strip_size_kb": 0, 00:10:40.614 "state": "online", 00:10:40.614 "raid_level": "raid1", 00:10:40.614 "superblock": true, 00:10:40.614 "num_base_bdevs": 4, 00:10:40.614 "num_base_bdevs_discovered": 3, 00:10:40.614 "num_base_bdevs_operational": 3, 00:10:40.614 "base_bdevs_list": [ 00:10:40.614 { 00:10:40.614 "name": null, 00:10:40.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.614 "is_configured": false, 00:10:40.614 
"data_offset": 2048, 00:10:40.614 "data_size": 63488 00:10:40.614 }, 00:10:40.614 { 00:10:40.614 "name": "pt2", 00:10:40.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.614 "is_configured": true, 00:10:40.614 "data_offset": 2048, 00:10:40.614 "data_size": 63488 00:10:40.614 }, 00:10:40.614 { 00:10:40.614 "name": "pt3", 00:10:40.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.614 "is_configured": true, 00:10:40.614 "data_offset": 2048, 00:10:40.614 "data_size": 63488 00:10:40.614 }, 00:10:40.614 { 00:10:40.614 "name": "pt4", 00:10:40.614 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.614 "is_configured": true, 00:10:40.614 "data_offset": 2048, 00:10:40.614 "data_size": 63488 00:10:40.614 } 00:10:40.614 ] 00:10:40.614 }' 00:10:40.614 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.614 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.875 [2024-11-28 16:23:32.468389] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.875 [2024-11-28 16:23:32.468501] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.875 [2024-11-28 16:23:32.468633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.875 [2024-11-28 16:23:32.468741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.875 [2024-11-28 16:23:32.468791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:40.875 16:23:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.875 [2024-11-28 16:23:32.540264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:40.875 [2024-11-28 16:23:32.540376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:40.875 [2024-11-28 16:23:32.540434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:40.875 [2024-11-28 16:23:32.540474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.875 [2024-11-28 16:23:32.543107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.875 [2024-11-28 16:23:32.543183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:40.875 [2024-11-28 16:23:32.543291] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:40.875 [2024-11-28 16:23:32.543367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:40.875 [2024-11-28 16:23:32.543543] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:40.875 [2024-11-28 16:23:32.543603] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.875 [2024-11-28 16:23:32.543640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:40.875 [2024-11-28 16:23:32.543730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.875 [2024-11-28 16:23:32.543883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:40.875 pt1 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.875 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.876 "name": "raid_bdev1", 00:10:40.876 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:40.876 "strip_size_kb": 0, 00:10:40.876 "state": "configuring", 00:10:40.876 "raid_level": "raid1", 00:10:40.876 "superblock": true, 00:10:40.876 "num_base_bdevs": 4, 00:10:40.876 "num_base_bdevs_discovered": 2, 00:10:40.876 "num_base_bdevs_operational": 3, 00:10:40.876 "base_bdevs_list": [ 00:10:40.876 { 00:10:40.876 "name": null, 00:10:40.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.876 "is_configured": false, 00:10:40.876 "data_offset": 2048, 00:10:40.876 
"data_size": 63488 00:10:40.876 }, 00:10:40.876 { 00:10:40.876 "name": "pt2", 00:10:40.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.876 "is_configured": true, 00:10:40.876 "data_offset": 2048, 00:10:40.876 "data_size": 63488 00:10:40.876 }, 00:10:40.876 { 00:10:40.876 "name": "pt3", 00:10:40.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.876 "is_configured": true, 00:10:40.876 "data_offset": 2048, 00:10:40.876 "data_size": 63488 00:10:40.876 }, 00:10:40.876 { 00:10:40.876 "name": null, 00:10:40.876 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.876 "is_configured": false, 00:10:40.876 "data_offset": 2048, 00:10:40.876 "data_size": 63488 00:10:40.876 } 00:10:40.876 ] 00:10:40.876 }' 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.876 16:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.446 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.446 [2024-11-28 
16:23:33.067409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:41.446 [2024-11-28 16:23:33.067548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.446 [2024-11-28 16:23:33.067589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:41.446 [2024-11-28 16:23:33.067629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.446 [2024-11-28 16:23:33.068190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.446 [2024-11-28 16:23:33.068260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:41.446 [2024-11-28 16:23:33.068371] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:41.446 [2024-11-28 16:23:33.068428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:41.446 [2024-11-28 16:23:33.068572] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:41.446 [2024-11-28 16:23:33.068614] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:41.446 [2024-11-28 16:23:33.068907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:41.446 [2024-11-28 16:23:33.069044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:41.446 [2024-11-28 16:23:33.069053] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:41.446 [2024-11-28 16:23:33.069177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.446 pt4 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:41.447 16:23:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.447 "name": "raid_bdev1", 00:10:41.447 "uuid": "ebf30d5a-7048-4aa5-84ac-a3060173314d", 00:10:41.447 "strip_size_kb": 0, 00:10:41.447 "state": "online", 00:10:41.447 "raid_level": "raid1", 00:10:41.447 "superblock": true, 00:10:41.447 "num_base_bdevs": 4, 00:10:41.447 "num_base_bdevs_discovered": 3, 00:10:41.447 "num_base_bdevs_operational": 3, 00:10:41.447 "base_bdevs_list": [ 00:10:41.447 { 
00:10:41.447 "name": null, 00:10:41.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.447 "is_configured": false, 00:10:41.447 "data_offset": 2048, 00:10:41.447 "data_size": 63488 00:10:41.447 }, 00:10:41.447 { 00:10:41.447 "name": "pt2", 00:10:41.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.447 "is_configured": true, 00:10:41.447 "data_offset": 2048, 00:10:41.447 "data_size": 63488 00:10:41.447 }, 00:10:41.447 { 00:10:41.447 "name": "pt3", 00:10:41.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.447 "is_configured": true, 00:10:41.447 "data_offset": 2048, 00:10:41.447 "data_size": 63488 00:10:41.447 }, 00:10:41.447 { 00:10:41.447 "name": "pt4", 00:10:41.447 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:41.447 "is_configured": true, 00:10:41.447 "data_offset": 2048, 00:10:41.447 "data_size": 63488 00:10:41.447 } 00:10:41.447 ] 00:10:41.447 }' 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.447 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:42.017 
16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.017 [2024-11-28 16:23:33.554958] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ebf30d5a-7048-4aa5-84ac-a3060173314d '!=' ebf30d5a-7048-4aa5-84ac-a3060173314d ']' 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85287 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85287 ']' 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85287 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85287 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.017 killing process with pid 85287 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85287' 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85287 00:10:42.017 [2024-11-28 16:23:33.622707] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.017 [2024-11-28 16:23:33.622796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.017 [2024-11-28 16:23:33.622889] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.017 [2024-11-28 16:23:33.622900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:42.017 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85287 00:10:42.017 [2024-11-28 16:23:33.666153] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.278 16:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:42.278 00:10:42.278 real 0m7.079s 00:10:42.278 user 0m11.911s 00:10:42.278 sys 0m1.520s 00:10:42.278 ************************************ 00:10:42.278 END TEST raid_superblock_test 00:10:42.278 ************************************ 00:10:42.278 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.278 16:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 16:23:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:42.278 16:23:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.278 16:23:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.278 16:23:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 ************************************ 00:10:42.278 START TEST raid_read_error_test 00:10:42.278 ************************************ 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:42.278 16:23:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9YykS5ki1w 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85758 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85758 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85758 ']' 00:10:42.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.278 16:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.538 [2024-11-28 16:23:34.072658] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:42.538 [2024-11-28 16:23:34.072873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85758 ] 00:10:42.538 [2024-11-28 16:23:34.217902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.538 [2024-11-28 16:23:34.264872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.538 [2024-11-28 16:23:34.306737] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.798 [2024-11-28 16:23:34.306888] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 BaseBdev1_malloc 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.370 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 true 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 [2024-11-28 16:23:34.924696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.371 [2024-11-28 16:23:34.924799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.371 [2024-11-28 16:23:34.924851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.371 [2024-11-28 16:23:34.924888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.371 [2024-11-28 16:23:34.926922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.371 [2024-11-28 16:23:34.926989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.371 BaseBdev1 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 BaseBdev2_malloc 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 true 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 [2024-11-28 16:23:34.975260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.371 [2024-11-28 16:23:34.975354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.371 [2024-11-28 16:23:34.975390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.371 [2024-11-28 16:23:34.975418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.371 [2024-11-28 16:23:34.977492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.371 [2024-11-28 16:23:34.977561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.371 BaseBdev2 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 BaseBdev3_malloc 00:10:43.371 16:23:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 true 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 [2024-11-28 16:23:35.015816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:43.371 [2024-11-28 16:23:35.015880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.371 [2024-11-28 16:23:35.015916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:43.371 [2024-11-28 16:23:35.015926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.371 [2024-11-28 16:23:35.017978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.371 [2024-11-28 16:23:35.018014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:43.371 BaseBdev3 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 BaseBdev4_malloc 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 true 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 [2024-11-28 16:23:35.056341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:43.371 [2024-11-28 16:23:35.056388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.371 [2024-11-28 16:23:35.056410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:43.371 [2024-11-28 16:23:35.056419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.371 [2024-11-28 16:23:35.058395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.371 [2024-11-28 16:23:35.058431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:43.371 BaseBdev4 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 [2024-11-28 16:23:35.068374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.371 [2024-11-28 16:23:35.070142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.371 [2024-11-28 16:23:35.070229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.371 [2024-11-28 16:23:35.070282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.371 [2024-11-28 16:23:35.070470] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:43.371 [2024-11-28 16:23:35.070482] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.371 [2024-11-28 16:23:35.070724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:43.371 [2024-11-28 16:23:35.070879] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:43.371 [2024-11-28 16:23:35.070892] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:43.371 [2024-11-28 16:23:35.071018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:43.371 16:23:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.371 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.371 "name": "raid_bdev1", 00:10:43.371 "uuid": "e0ba2908-973c-448a-8bc7-8a91243c1b6b", 00:10:43.371 "strip_size_kb": 0, 00:10:43.371 "state": "online", 00:10:43.371 "raid_level": "raid1", 00:10:43.371 "superblock": true, 00:10:43.371 "num_base_bdevs": 4, 00:10:43.371 "num_base_bdevs_discovered": 4, 00:10:43.371 "num_base_bdevs_operational": 4, 00:10:43.371 "base_bdevs_list": [ 00:10:43.371 { 
00:10:43.371 "name": "BaseBdev1", 00:10:43.371 "uuid": "5dd15f8a-de26-5f50-8cff-fa110c1ac437", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 }, 00:10:43.372 { 00:10:43.372 "name": "BaseBdev2", 00:10:43.372 "uuid": "0e942ba2-e8b7-59f9-aaa5-3b1eeb14153e", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 }, 00:10:43.372 { 00:10:43.372 "name": "BaseBdev3", 00:10:43.372 "uuid": "1bea5110-b471-5fe0-9a3a-67a3f520775f", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 }, 00:10:43.372 { 00:10:43.372 "name": "BaseBdev4", 00:10:43.372 "uuid": "a032cfe2-69e9-55b7-ac20-5dc50e46b06a", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 } 00:10:43.372 ] 00:10:43.372 }' 00:10:43.372 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.372 16:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.940 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.940 16:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:43.940 [2024-11-28 16:23:35.611840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.911 16:23:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.911 16:23:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.911 "name": "raid_bdev1", 00:10:44.911 "uuid": "e0ba2908-973c-448a-8bc7-8a91243c1b6b", 00:10:44.911 "strip_size_kb": 0, 00:10:44.911 "state": "online", 00:10:44.911 "raid_level": "raid1", 00:10:44.911 "superblock": true, 00:10:44.911 "num_base_bdevs": 4, 00:10:44.911 "num_base_bdevs_discovered": 4, 00:10:44.911 "num_base_bdevs_operational": 4, 00:10:44.911 "base_bdevs_list": [ 00:10:44.911 { 00:10:44.911 "name": "BaseBdev1", 00:10:44.911 "uuid": "5dd15f8a-de26-5f50-8cff-fa110c1ac437", 00:10:44.911 "is_configured": true, 00:10:44.911 "data_offset": 2048, 00:10:44.911 "data_size": 63488 00:10:44.911 }, 00:10:44.911 { 00:10:44.911 "name": "BaseBdev2", 00:10:44.911 "uuid": "0e942ba2-e8b7-59f9-aaa5-3b1eeb14153e", 00:10:44.911 "is_configured": true, 00:10:44.911 "data_offset": 2048, 00:10:44.911 "data_size": 63488 00:10:44.911 }, 00:10:44.911 { 00:10:44.911 "name": "BaseBdev3", 00:10:44.911 "uuid": "1bea5110-b471-5fe0-9a3a-67a3f520775f", 00:10:44.911 "is_configured": true, 00:10:44.911 "data_offset": 2048, 00:10:44.911 "data_size": 63488 00:10:44.911 }, 00:10:44.911 { 00:10:44.911 "name": "BaseBdev4", 00:10:44.911 "uuid": "a032cfe2-69e9-55b7-ac20-5dc50e46b06a", 00:10:44.911 "is_configured": true, 00:10:44.911 "data_offset": 2048, 00:10:44.911 "data_size": 63488 00:10:44.911 } 00:10:44.911 ] 00:10:44.911 }' 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.911 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.493 16:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.493 16:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.493 16:23:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.493 [2024-11-28 16:23:37.002399] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.493 [2024-11-28 16:23:37.002499] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.493 [2024-11-28 16:23:37.005049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.493 [2024-11-28 16:23:37.005136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.493 [2024-11-28 16:23:37.005291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.493 [2024-11-28 16:23:37.005346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:45.493 { 00:10:45.493 "results": [ 00:10:45.493 { 00:10:45.493 "job": "raid_bdev1", 00:10:45.493 "core_mask": "0x1", 00:10:45.493 "workload": "randrw", 00:10:45.493 "percentage": 50, 00:10:45.493 "status": "finished", 00:10:45.493 "queue_depth": 1, 00:10:45.493 "io_size": 131072, 00:10:45.493 "runtime": 1.391489, 00:10:45.493 "iops": 11940.446528862247, 00:10:45.493 "mibps": 1492.5558161077809, 00:10:45.493 "io_failed": 0, 00:10:45.493 "io_timeout": 0, 00:10:45.493 "avg_latency_us": 81.26698329888156, 00:10:45.493 "min_latency_us": 22.134497816593885, 00:10:45.493 "max_latency_us": 1387.989519650655 00:10:45.493 } 00:10:45.493 ], 00:10:45.493 "core_count": 1 00:10:45.493 } 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85758 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85758 ']' 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85758 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85758 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.493 killing process with pid 85758 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85758' 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85758 00:10:45.493 [2024-11-28 16:23:37.045792] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.493 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85758 00:10:45.493 [2024-11-28 16:23:37.080647] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9YykS5ki1w 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:45.759 ************************************ 00:10:45.759 END TEST raid_read_error_test 00:10:45.759 ************************************ 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:45.759 00:10:45.759 real 0m3.355s 00:10:45.759 user 0m4.222s 00:10:45.759 sys 0m0.542s 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.759 16:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.759 16:23:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:45.759 16:23:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:45.759 16:23:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.759 16:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.759 ************************************ 00:10:45.759 START TEST raid_write_error_test 00:10:45.759 ************************************ 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qLbpSJRyLX 00:10:45.760 16:23:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85893 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:45.760 16:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85893 00:10:45.761 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85893 ']' 00:10:45.761 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.761 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.761 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.761 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.761 16:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.761 [2024-11-28 16:23:37.507903] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:45.761 [2024-11-28 16:23:37.508052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85893 ] 00:10:46.024 [2024-11-28 16:23:37.655082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.024 [2024-11-28 16:23:37.698549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.024 [2024-11-28 16:23:37.740865] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.024 [2024-11-28 16:23:37.740902] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.593 BaseBdev1_malloc 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.593 true 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.593 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.593 [2024-11-28 16:23:38.346736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:46.593 [2024-11-28 16:23:38.346796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.593 [2024-11-28 16:23:38.346816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:46.593 [2024-11-28 16:23:38.346825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.593 [2024-11-28 16:23:38.348910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.594 [2024-11-28 16:23:38.349002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:46.594 BaseBdev1 00:10:46.594 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.594 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.594 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:46.594 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.594 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.854 BaseBdev2_malloc 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:46.854 16:23:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.854 true 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.854 [2024-11-28 16:23:38.396191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:46.854 [2024-11-28 16:23:38.396281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.854 [2024-11-28 16:23:38.396318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:46.854 [2024-11-28 16:23:38.396327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.854 [2024-11-28 16:23:38.398336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.854 [2024-11-28 16:23:38.398371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:46.854 BaseBdev2 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:46.854 BaseBdev3_malloc 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.854 true 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.854 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.854 [2024-11-28 16:23:38.436543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:46.854 [2024-11-28 16:23:38.436590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.854 [2024-11-28 16:23:38.436608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:46.854 [2024-11-28 16:23:38.436617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.854 [2024-11-28 16:23:38.438621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.854 [2024-11-28 16:23:38.438657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:46.855 BaseBdev3 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.855 BaseBdev4_malloc 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.855 true 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.855 [2024-11-28 16:23:38.476882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:46.855 [2024-11-28 16:23:38.476968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.855 [2024-11-28 16:23:38.476992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:46.855 [2024-11-28 16:23:38.477000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.855 [2024-11-28 16:23:38.478975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.855 [2024-11-28 16:23:38.479011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:46.855 BaseBdev4 
00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.855 [2024-11-28 16:23:38.488907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.855 [2024-11-28 16:23:38.490664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.855 [2024-11-28 16:23:38.490750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.855 [2024-11-28 16:23:38.490802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:46.855 [2024-11-28 16:23:38.491010] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:46.855 [2024-11-28 16:23:38.491026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.855 [2024-11-28 16:23:38.491286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:46.855 [2024-11-28 16:23:38.491429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:46.855 [2024-11-28 16:23:38.491446] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:46.855 [2024-11-28 16:23:38.491557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.855 "name": "raid_bdev1", 00:10:46.855 "uuid": "e652d856-a3fb-4983-a2c6-d6af79fbbb68", 00:10:46.855 "strip_size_kb": 0, 00:10:46.855 "state": "online", 00:10:46.855 "raid_level": "raid1", 00:10:46.855 "superblock": true, 00:10:46.855 "num_base_bdevs": 4, 00:10:46.855 "num_base_bdevs_discovered": 4, 00:10:46.855 
"num_base_bdevs_operational": 4, 00:10:46.855 "base_bdevs_list": [ 00:10:46.855 { 00:10:46.855 "name": "BaseBdev1", 00:10:46.855 "uuid": "94fde98c-815a-50bd-95ed-ddaa36ec3cf5", 00:10:46.855 "is_configured": true, 00:10:46.855 "data_offset": 2048, 00:10:46.855 "data_size": 63488 00:10:46.855 }, 00:10:46.855 { 00:10:46.855 "name": "BaseBdev2", 00:10:46.855 "uuid": "80ebd679-aea9-564f-9ea1-a4d3795a100b", 00:10:46.855 "is_configured": true, 00:10:46.855 "data_offset": 2048, 00:10:46.855 "data_size": 63488 00:10:46.855 }, 00:10:46.855 { 00:10:46.855 "name": "BaseBdev3", 00:10:46.855 "uuid": "182a0eae-bc83-5c10-9646-df0cb06b6394", 00:10:46.855 "is_configured": true, 00:10:46.855 "data_offset": 2048, 00:10:46.855 "data_size": 63488 00:10:46.855 }, 00:10:46.855 { 00:10:46.855 "name": "BaseBdev4", 00:10:46.855 "uuid": "f7fac2e3-92fe-5795-86b3-23caf75619a2", 00:10:46.855 "is_configured": true, 00:10:46.855 "data_offset": 2048, 00:10:46.855 "data_size": 63488 00:10:46.855 } 00:10:46.855 ] 00:10:46.855 }' 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.855 16:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.425 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:47.425 16:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:47.425 [2024-11-28 16:23:39.068277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.363 16:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:48.363 16:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.363 16:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.363 [2024-11-28 16:23:39.997881] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:48.363 [2024-11-28 16:23:39.997939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.363 [2024-11-28 16:23:39.998174] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.363 "name": "raid_bdev1", 00:10:48.363 "uuid": "e652d856-a3fb-4983-a2c6-d6af79fbbb68", 00:10:48.363 "strip_size_kb": 0, 00:10:48.363 "state": "online", 00:10:48.363 "raid_level": "raid1", 00:10:48.363 "superblock": true, 00:10:48.363 "num_base_bdevs": 4, 00:10:48.363 "num_base_bdevs_discovered": 3, 00:10:48.363 "num_base_bdevs_operational": 3, 00:10:48.363 "base_bdevs_list": [ 00:10:48.363 { 00:10:48.363 "name": null, 00:10:48.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.363 "is_configured": false, 00:10:48.363 "data_offset": 0, 00:10:48.363 "data_size": 63488 00:10:48.363 }, 00:10:48.363 { 00:10:48.363 "name": "BaseBdev2", 00:10:48.363 "uuid": "80ebd679-aea9-564f-9ea1-a4d3795a100b", 00:10:48.363 "is_configured": true, 00:10:48.363 "data_offset": 2048, 00:10:48.363 "data_size": 63488 00:10:48.363 }, 00:10:48.363 { 00:10:48.363 "name": "BaseBdev3", 00:10:48.363 "uuid": "182a0eae-bc83-5c10-9646-df0cb06b6394", 00:10:48.363 "is_configured": true, 00:10:48.363 "data_offset": 2048, 00:10:48.363 "data_size": 63488 00:10:48.363 }, 00:10:48.363 { 00:10:48.363 "name": "BaseBdev4", 00:10:48.363 "uuid": "f7fac2e3-92fe-5795-86b3-23caf75619a2", 00:10:48.363 "is_configured": true, 00:10:48.363 "data_offset": 2048, 00:10:48.363 "data_size": 63488 00:10:48.363 } 00:10:48.363 ] 
00:10:48.363 }' 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.363 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.932 [2024-11-28 16:23:40.456982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.932 [2024-11-28 16:23:40.457086] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.932 [2024-11-28 16:23:40.459636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.932 [2024-11-28 16:23:40.459734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.932 [2024-11-28 16:23:40.459865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.932 [2024-11-28 16:23:40.459928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:48.932 { 00:10:48.932 "results": [ 00:10:48.932 { 00:10:48.932 "job": "raid_bdev1", 00:10:48.932 "core_mask": "0x1", 00:10:48.932 "workload": "randrw", 00:10:48.932 "percentage": 50, 00:10:48.932 "status": "finished", 00:10:48.932 "queue_depth": 1, 00:10:48.932 "io_size": 131072, 00:10:48.932 "runtime": 1.389675, 00:10:48.932 "iops": 12765.574684728444, 00:10:48.932 "mibps": 1595.6968355910556, 00:10:48.932 "io_failed": 0, 00:10:48.932 "io_timeout": 0, 00:10:48.932 "avg_latency_us": 75.80754833278358, 00:10:48.932 "min_latency_us": 22.246288209606988, 00:10:48.932 "max_latency_us": 1538.235807860262 00:10:48.932 } 00:10:48.932 ], 00:10:48.932 "core_count": 1 
00:10:48.932 } 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85893 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85893 ']' 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85893 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85893 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85893' 00:10:48.932 killing process with pid 85893 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85893 00:10:48.932 [2024-11-28 16:23:40.499291] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.932 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85893 00:10:48.932 [2024-11-28 16:23:40.534085] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qLbpSJRyLX 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:49.192 ************************************ 00:10:49.192 END TEST raid_write_error_test 00:10:49.192 ************************************ 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:49.192 00:10:49.192 real 0m3.386s 00:10:49.192 user 0m4.249s 00:10:49.192 sys 0m0.572s 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.192 16:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.192 16:23:40 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:49.192 16:23:40 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:49.192 16:23:40 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:49.192 16:23:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:49.192 16:23:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.192 16:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.192 ************************************ 00:10:49.192 START TEST raid_rebuild_test 00:10:49.192 ************************************ 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:49.192 
16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86020 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86020 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86020 ']' 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.192 16:23:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.192 [2024-11-28 16:23:40.953090] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:49.192 [2024-11-28 16:23:40.953261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:10:49.192 Zero copy mechanism will not be used. 
00:10:49.192 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86020 ] 00:10:49.452 [2024-11-28 16:23:41.116430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.452 [2024-11-28 16:23:41.161661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.452 [2024-11-28 16:23:41.204176] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.452 [2024-11-28 16:23:41.204277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.022 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.022 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:50.022 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:50.022 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:50.022 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.022 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.282 BaseBdev1_malloc 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.282 [2024-11-28 16:23:41.802269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:50.282 [2024-11-28 16:23:41.802393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.282 [2024-11-28 
16:23:41.802439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:50.282 [2024-11-28 16:23:41.802476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.282 [2024-11-28 16:23:41.804601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.282 [2024-11-28 16:23:41.804675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:50.282 BaseBdev1 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.282 BaseBdev2_malloc 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.282 [2024-11-28 16:23:41.846823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:50.282 [2024-11-28 16:23:41.847029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.282 [2024-11-28 16:23:41.847114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:50.282 [2024-11-28 16:23:41.847187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:50.282 [2024-11-28 16:23:41.851677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.282 [2024-11-28 16:23:41.851758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:50.282 BaseBdev2 00:10:50.282 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.283 spare_malloc 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.283 spare_delay 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.283 [2024-11-28 16:23:41.889517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:50.283 [2024-11-28 16:23:41.889607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.283 [2024-11-28 16:23:41.889644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:10:50.283 [2024-11-28 16:23:41.889672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.283 [2024-11-28 16:23:41.891771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.283 [2024-11-28 16:23:41.891846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:50.283 spare 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.283 [2024-11-28 16:23:41.901513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.283 [2024-11-28 16:23:41.903278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.283 [2024-11-28 16:23:41.903361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:50.283 [2024-11-28 16:23:41.903372] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:50.283 [2024-11-28 16:23:41.903604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:50.283 [2024-11-28 16:23:41.903715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:50.283 [2024-11-28 16:23:41.903743] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:50.283 [2024-11-28 16:23:41.903881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.283 
16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.283 "name": "raid_bdev1", 00:10:50.283 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:50.283 "strip_size_kb": 0, 00:10:50.283 "state": "online", 00:10:50.283 "raid_level": "raid1", 00:10:50.283 "superblock": false, 00:10:50.283 "num_base_bdevs": 2, 00:10:50.283 "num_base_bdevs_discovered": 
2, 00:10:50.283 "num_base_bdevs_operational": 2, 00:10:50.283 "base_bdevs_list": [ 00:10:50.283 { 00:10:50.283 "name": "BaseBdev1", 00:10:50.283 "uuid": "8dcecd17-a7ea-5fb7-9240-9d85f64c49ee", 00:10:50.283 "is_configured": true, 00:10:50.283 "data_offset": 0, 00:10:50.283 "data_size": 65536 00:10:50.283 }, 00:10:50.283 { 00:10:50.283 "name": "BaseBdev2", 00:10:50.283 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:50.283 "is_configured": true, 00:10:50.283 "data_offset": 0, 00:10:50.283 "data_size": 65536 00:10:50.283 } 00:10:50.283 ] 00:10:50.283 }' 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.283 16:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.543 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:50.543 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.543 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.543 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.803 [2024-11-28 16:23:42.317151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:50.803 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:51.062 [2024-11-28 16:23:42.592381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:51.062 /dev/nbd0 00:10:51.062 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:51.062 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:51.062 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:51.063 1+0 records in 00:10:51.063 1+0 records out 00:10:51.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514915 s, 8.0 MB/s 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:10:51.063 16:23:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:55.260 65536+0 records in 00:10:55.260 65536+0 records out 00:10:55.260 33554432 bytes (34 MB, 32 MiB) copied, 4.07555 s, 8.2 MB/s 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:55.260 [2024-11-28 16:23:46.953170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.260 
16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.260 [2024-11-28 16:23:46.965226] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.260 16:23:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.260 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.260 "name": "raid_bdev1", 00:10:55.260 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:55.260 "strip_size_kb": 0, 00:10:55.260 "state": "online", 00:10:55.260 "raid_level": "raid1", 00:10:55.260 "superblock": false, 00:10:55.260 "num_base_bdevs": 2, 00:10:55.260 "num_base_bdevs_discovered": 1, 00:10:55.260 "num_base_bdevs_operational": 1, 00:10:55.260 "base_bdevs_list": [ 00:10:55.260 { 00:10:55.260 "name": null, 00:10:55.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.260 "is_configured": false, 00:10:55.260 "data_offset": 0, 00:10:55.260 "data_size": 65536 00:10:55.260 }, 00:10:55.260 { 00:10:55.260 "name": "BaseBdev2", 00:10:55.260 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:55.260 "is_configured": true, 00:10:55.260 "data_offset": 0, 00:10:55.260 "data_size": 65536 00:10:55.260 } 00:10:55.260 ] 00:10:55.260 }' 00:10:55.260 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.260 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.831 16:23:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:55.831 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.831 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.831 [2024-11-28 16:23:47.412514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:55.831 [2024-11-28 16:23:47.416788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:10:55.831 16:23:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.831 16:23:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:55.831 [2024-11-28 16:23:47.418683] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.772 "name": "raid_bdev1", 00:10:56.772 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:56.772 "strip_size_kb": 0, 00:10:56.772 "state": "online", 00:10:56.772 "raid_level": "raid1", 00:10:56.772 "superblock": false, 00:10:56.772 "num_base_bdevs": 2, 00:10:56.772 "num_base_bdevs_discovered": 2, 00:10:56.772 "num_base_bdevs_operational": 2, 00:10:56.772 "process": { 00:10:56.772 "type": "rebuild", 00:10:56.772 "target": "spare", 00:10:56.772 "progress": { 00:10:56.772 "blocks": 20480, 00:10:56.772 "percent": 31 00:10:56.772 } 00:10:56.772 }, 00:10:56.772 "base_bdevs_list": [ 00:10:56.772 { 
00:10:56.772 "name": "spare", 00:10:56.772 "uuid": "398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:10:56.772 "is_configured": true, 00:10:56.772 "data_offset": 0, 00:10:56.772 "data_size": 65536 00:10:56.772 }, 00:10:56.772 { 00:10:56.772 "name": "BaseBdev2", 00:10:56.772 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:56.772 "is_configured": true, 00:10:56.772 "data_offset": 0, 00:10:56.772 "data_size": 65536 00:10:56.772 } 00:10:56.772 ] 00:10:56.772 }' 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:56.772 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.032 [2024-11-28 16:23:48.563486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.032 [2024-11-28 16:23:48.623187] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:57.032 [2024-11-28 16:23:48.623242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.032 [2024-11-28 16:23:48.623260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:57.032 [2024-11-28 16:23:48.623267] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.032 16:23:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.032 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.032 "name": "raid_bdev1", 00:10:57.032 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:57.032 "strip_size_kb": 0, 00:10:57.032 "state": "online", 00:10:57.032 "raid_level": "raid1", 00:10:57.032 "superblock": false, 00:10:57.032 "num_base_bdevs": 2, 00:10:57.032 "num_base_bdevs_discovered": 1, 
00:10:57.032 "num_base_bdevs_operational": 1, 00:10:57.032 "base_bdevs_list": [ 00:10:57.032 { 00:10:57.032 "name": null, 00:10:57.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.032 "is_configured": false, 00:10:57.032 "data_offset": 0, 00:10:57.033 "data_size": 65536 00:10:57.033 }, 00:10:57.033 { 00:10:57.033 "name": "BaseBdev2", 00:10:57.033 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:57.033 "is_configured": true, 00:10:57.033 "data_offset": 0, 00:10:57.033 "data_size": 65536 00:10:57.033 } 00:10:57.033 ] 00:10:57.033 }' 00:10:57.033 16:23:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.033 16:23:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:57.603 "name": "raid_bdev1", 00:10:57.603 "uuid": 
"e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:57.603 "strip_size_kb": 0, 00:10:57.603 "state": "online", 00:10:57.603 "raid_level": "raid1", 00:10:57.603 "superblock": false, 00:10:57.603 "num_base_bdevs": 2, 00:10:57.603 "num_base_bdevs_discovered": 1, 00:10:57.603 "num_base_bdevs_operational": 1, 00:10:57.603 "base_bdevs_list": [ 00:10:57.603 { 00:10:57.603 "name": null, 00:10:57.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.603 "is_configured": false, 00:10:57.603 "data_offset": 0, 00:10:57.603 "data_size": 65536 00:10:57.603 }, 00:10:57.603 { 00:10:57.603 "name": "BaseBdev2", 00:10:57.603 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:57.603 "is_configured": true, 00:10:57.603 "data_offset": 0, 00:10:57.603 "data_size": 65536 00:10:57.603 } 00:10:57.603 ] 00:10:57.603 }' 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.603 [2024-11-28 16:23:49.222628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:57.603 [2024-11-28 16:23:49.226688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.603 16:23:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:10:57.603 [2024-11-28 16:23:49.228489] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.543 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.544 "name": "raid_bdev1", 00:10:58.544 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:58.544 "strip_size_kb": 0, 00:10:58.544 "state": "online", 00:10:58.544 "raid_level": "raid1", 00:10:58.544 "superblock": false, 00:10:58.544 "num_base_bdevs": 2, 00:10:58.544 "num_base_bdevs_discovered": 2, 00:10:58.544 "num_base_bdevs_operational": 2, 00:10:58.544 "process": { 00:10:58.544 "type": "rebuild", 00:10:58.544 "target": "spare", 00:10:58.544 "progress": { 00:10:58.544 "blocks": 20480, 00:10:58.544 "percent": 31 00:10:58.544 } 00:10:58.544 }, 00:10:58.544 "base_bdevs_list": [ 00:10:58.544 { 00:10:58.544 "name": "spare", 00:10:58.544 "uuid": 
"398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:10:58.544 "is_configured": true, 00:10:58.544 "data_offset": 0, 00:10:58.544 "data_size": 65536 00:10:58.544 }, 00:10:58.544 { 00:10:58.544 "name": "BaseBdev2", 00:10:58.544 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:58.544 "is_configured": true, 00:10:58.544 "data_offset": 0, 00:10:58.544 "data_size": 65536 00:10:58.544 } 00:10:58.544 ] 00:10:58.544 }' 00:10:58.544 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=289 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.804 "name": "raid_bdev1", 00:10:58.804 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:10:58.804 "strip_size_kb": 0, 00:10:58.804 "state": "online", 00:10:58.804 "raid_level": "raid1", 00:10:58.804 "superblock": false, 00:10:58.804 "num_base_bdevs": 2, 00:10:58.804 "num_base_bdevs_discovered": 2, 00:10:58.804 "num_base_bdevs_operational": 2, 00:10:58.804 "process": { 00:10:58.804 "type": "rebuild", 00:10:58.804 "target": "spare", 00:10:58.804 "progress": { 00:10:58.804 "blocks": 22528, 00:10:58.804 "percent": 34 00:10:58.804 } 00:10:58.804 }, 00:10:58.804 "base_bdevs_list": [ 00:10:58.804 { 00:10:58.804 "name": "spare", 00:10:58.804 "uuid": "398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:10:58.804 "is_configured": true, 00:10:58.804 "data_offset": 0, 00:10:58.804 "data_size": 65536 00:10:58.804 }, 00:10:58.804 { 00:10:58.804 "name": "BaseBdev2", 00:10:58.804 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:10:58.804 "is_configured": true, 00:10:58.804 "data_offset": 0, 00:10:58.804 "data_size": 65536 00:10:58.804 } 00:10:58.804 ] 00:10:58.804 }' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.804 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:58.805 16:23:50 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.805 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:58.805 16:23:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.745 16:23:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.005 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.005 "name": "raid_bdev1", 00:11:00.005 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:11:00.005 "strip_size_kb": 0, 00:11:00.005 "state": "online", 00:11:00.005 "raid_level": "raid1", 00:11:00.005 "superblock": false, 00:11:00.005 "num_base_bdevs": 2, 00:11:00.005 "num_base_bdevs_discovered": 2, 00:11:00.005 "num_base_bdevs_operational": 2, 00:11:00.005 "process": { 00:11:00.005 "type": "rebuild", 00:11:00.005 "target": "spare", 
00:11:00.005 "progress": { 00:11:00.005 "blocks": 45056, 00:11:00.005 "percent": 68 00:11:00.005 } 00:11:00.005 }, 00:11:00.005 "base_bdevs_list": [ 00:11:00.005 { 00:11:00.005 "name": "spare", 00:11:00.005 "uuid": "398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:11:00.005 "is_configured": true, 00:11:00.005 "data_offset": 0, 00:11:00.005 "data_size": 65536 00:11:00.005 }, 00:11:00.005 { 00:11:00.005 "name": "BaseBdev2", 00:11:00.005 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:11:00.005 "is_configured": true, 00:11:00.005 "data_offset": 0, 00:11:00.005 "data_size": 65536 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 }' 00:11:00.005 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.005 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.005 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.005 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.005 16:23:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:00.946 [2024-11-28 16:23:52.439447] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:00.946 [2024-11-28 16:23:52.439590] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:00.946 [2024-11-28 16:23:52.439669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.946 "name": "raid_bdev1", 00:11:00.946 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:11:00.946 "strip_size_kb": 0, 00:11:00.946 "state": "online", 00:11:00.946 "raid_level": "raid1", 00:11:00.946 "superblock": false, 00:11:00.946 "num_base_bdevs": 2, 00:11:00.946 "num_base_bdevs_discovered": 2, 00:11:00.946 "num_base_bdevs_operational": 2, 00:11:00.946 "base_bdevs_list": [ 00:11:00.946 { 00:11:00.946 "name": "spare", 00:11:00.946 "uuid": "398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:11:00.946 "is_configured": true, 00:11:00.946 "data_offset": 0, 00:11:00.946 "data_size": 65536 00:11:00.946 }, 00:11:00.946 { 00:11:00.946 "name": "BaseBdev2", 00:11:00.946 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:11:00.946 "is_configured": true, 00:11:00.946 "data_offset": 0, 00:11:00.946 "data_size": 65536 00:11:00.946 } 00:11:00.946 ] 00:11:00.946 }' 00:11:00.946 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.209 "name": "raid_bdev1", 00:11:01.209 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:11:01.209 "strip_size_kb": 0, 00:11:01.209 "state": "online", 00:11:01.209 "raid_level": "raid1", 00:11:01.209 "superblock": false, 00:11:01.209 "num_base_bdevs": 2, 00:11:01.209 "num_base_bdevs_discovered": 2, 00:11:01.209 "num_base_bdevs_operational": 2, 00:11:01.209 "base_bdevs_list": [ 00:11:01.209 { 00:11:01.209 "name": "spare", 00:11:01.209 "uuid": "398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:11:01.209 "is_configured": true, 00:11:01.209 "data_offset": 0, 00:11:01.209 "data_size": 65536 
00:11:01.209 }, 00:11:01.209 { 00:11:01.209 "name": "BaseBdev2", 00:11:01.209 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:11:01.209 "is_configured": true, 00:11:01.209 "data_offset": 0, 00:11:01.209 "data_size": 65536 00:11:01.209 } 00:11:01.209 ] 00:11:01.209 }' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.209 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.209 "name": "raid_bdev1", 00:11:01.210 "uuid": "e135e899-8cf1-4b5d-8f3d-563f4609b1f5", 00:11:01.210 "strip_size_kb": 0, 00:11:01.210 "state": "online", 00:11:01.210 "raid_level": "raid1", 00:11:01.210 "superblock": false, 00:11:01.210 "num_base_bdevs": 2, 00:11:01.210 "num_base_bdevs_discovered": 2, 00:11:01.210 "num_base_bdevs_operational": 2, 00:11:01.210 "base_bdevs_list": [ 00:11:01.210 { 00:11:01.210 "name": "spare", 00:11:01.210 "uuid": "398c828c-2fd4-5b92-a36b-612bbbe0349c", 00:11:01.210 "is_configured": true, 00:11:01.210 "data_offset": 0, 00:11:01.210 "data_size": 65536 00:11:01.210 }, 00:11:01.210 { 00:11:01.210 "name": "BaseBdev2", 00:11:01.210 "uuid": "b29f4497-3e10-59f9-ba99-e14860e9d222", 00:11:01.210 "is_configured": true, 00:11:01.210 "data_offset": 0, 00:11:01.210 "data_size": 65536 00:11:01.210 } 00:11:01.210 ] 00:11:01.210 }' 00:11:01.210 16:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.210 16:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.801 [2024-11-28 16:23:53.330132] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.801 [2024-11-28 16:23:53.330207] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:11:01.801 [2024-11-28 16:23:53.330312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.801 [2024-11-28 16:23:53.330398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.801 [2024-11-28 16:23:53.330444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.801 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:01.801 /dev/nbd0 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.060 1+0 records in 00:11:02.060 1+0 records out 00:11:02.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519487 s, 7.9 MB/s 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.060 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:02.060 /dev/nbd1 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:11:02.320 1+0 records in 00:11:02.320 1+0 records out 00:11:02.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256748 s, 16.0 MB/s 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.320 16:23:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:02.580 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:02.580 16:23:54 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:02.580 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:02.580 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.580 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.580 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:02.581 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:02.581 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.581 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.581 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86020 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 
86020 ']' 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86020 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86020 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.841 killing process with pid 86020 00:11:02.841 Received shutdown signal, test time was about 60.000000 seconds 00:11:02.841 00:11:02.841 Latency(us) 00:11:02.841 [2024-11-28T16:23:54.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.841 [2024-11-28T16:23:54.612Z] =================================================================================================================== 00:11:02.841 [2024-11-28T16:23:54.612Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86020' 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86020 00:11:02.841 [2024-11-28 16:23:54.429310] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.841 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86020 00:11:02.841 [2024-11-28 16:23:54.460568] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.102 ************************************ 00:11:03.102 END TEST raid_rebuild_test 00:11:03.102 ************************************ 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:03.102 00:11:03.102 real 0m13.839s 
00:11:03.102 user 0m15.848s 00:11:03.102 sys 0m3.027s 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.102 16:23:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:03.102 16:23:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:03.102 16:23:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.102 16:23:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.102 ************************************ 00:11:03.102 START TEST raid_rebuild_test_sb 00:11:03.102 ************************************ 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86426 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86426 00:11:03.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86426 ']' 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.102 16:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.362 [2024-11-28 16:23:54.877481] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:03.362 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:03.362 Zero copy mechanism will not be used. 00:11:03.362 [2024-11-28 16:23:54.877703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86426 ] 00:11:03.362 [2024-11-28 16:23:55.029184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.362 [2024-11-28 16:23:55.073721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.362 [2024-11-28 16:23:55.116279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.362 [2024-11-28 16:23:55.116323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.932 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.932 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:03.932 16:23:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:03.932 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.932 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.932 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.191 BaseBdev1_malloc 00:11:04.191 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.191 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:04.191 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.192 [2024-11-28 16:23:55.714774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:04.192 [2024-11-28 16:23:55.714841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.192 [2024-11-28 16:23:55.714867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:04.192 [2024-11-28 16:23:55.714881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.192 [2024-11-28 16:23:55.716913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.192 [2024-11-28 16:23:55.716945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.192 BaseBdev1 00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 BaseBdev2_malloc
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 [2024-11-28 16:23:55.751157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:11:04.192 [2024-11-28 16:23:55.751214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:04.192 [2024-11-28 16:23:55.751238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:04.192 [2024-11-28 16:23:55.751248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:04.192 [2024-11-28 16:23:55.753567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:04.192 [2024-11-28 16:23:55.753603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:04.192 BaseBdev2
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 spare_malloc
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 spare_delay
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 [2024-11-28 16:23:55.791470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:04.192 [2024-11-28 16:23:55.791517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:04.192 [2024-11-28 16:23:55.791536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:11:04.192 [2024-11-28 16:23:55.791544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:04.192 [2024-11-28 16:23:55.793538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:04.192 [2024-11-28 16:23:55.793571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:04.192 spare
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 [2024-11-28 16:23:55.803489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:04.192 [2024-11-28 16:23:55.805288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:04.192 [2024-11-28 16:23:55.805434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:11:04.192 [2024-11-28 16:23:55.805451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:04.192 [2024-11-28 16:23:55.805678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:11:04.192 [2024-11-28 16:23:55.805858] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:11:04.192 [2024-11-28 16:23:55.805878] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:11:04.192 [2024-11-28 16:23:55.805983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:04.192 "name": "raid_bdev1",
00:11:04.192 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:04.192 "strip_size_kb": 0,
00:11:04.192 "state": "online",
00:11:04.192 "raid_level": "raid1",
00:11:04.192 "superblock": true,
00:11:04.192 "num_base_bdevs": 2,
00:11:04.192 "num_base_bdevs_discovered": 2,
00:11:04.192 "num_base_bdevs_operational": 2,
00:11:04.192 "base_bdevs_list": [
00:11:04.192 {
00:11:04.192 "name": "BaseBdev1",
00:11:04.192 "uuid": "379de509-4581-5636-bfa6-acee7e3434fe",
00:11:04.192 "is_configured": true,
00:11:04.192 "data_offset": 2048,
00:11:04.192 "data_size": 63488
00:11:04.192 },
00:11:04.192 {
00:11:04.192 "name": "BaseBdev2",
00:11:04.192 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:04.192 "is_configured": true,
00:11:04.192 "data_offset": 2048,
00:11:04.192 "data_size": 63488
00:11:04.192 }
00:11:04.192 ]
00:11:04.192 }'
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:04.192 16:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:11:04.762 [2024-11-28 16:23:56.258959] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:04.762 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:11:04.762 [2024-11-28 16:23:56.494374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:04.762 /dev/nbd0
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:05.022 1+0 records in
00:11:05.022 1+0 records out
00:11:05.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428175 s, 9.6 MB/s
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:11:05.022 16:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:11:08.318 63488+0 records in
00:11:08.318 63488+0 records out
00:11:08.318 32505856 bytes (33 MB, 31 MiB) copied, 3.44698 s, 9.4 MB/s
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
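Two numbers in this log can be cross-checked with plain bash arithmetic, independent of SPDK. The `dd` transfer above should cover the whole raid1 bdev (geometry `blockcnt 63488, blocklen 512` from `raid_bdev_configure_cont`), and the rebuild `percent` fields reported later by `bdev_raid_get_bdevs` appear consistent with integer division of rebuilt `blocks` by the 63488-block data size; the latter is an inference from the reported values, not a documented formula. A quick sketch:

```shell
#!/usr/bin/env bash
# Sanity checks on values reported in this test log, using only bash arithmetic.

# 1) dd byte count: blockcnt * blocklen should match "32505856 bytes ... 31 MiB".
blockcnt=63488
blocklen=512
bytes=$((blockcnt * blocklen))
echo "dd total: $bytes bytes ($((bytes / 1024 / 1024)) MiB)"

# 2) Rebuild progress: the "percent" fields in the log (32, 35, 70) look like
#    blocks * 100 / data_size with truncating integer division (an assumption).
for blocks in 20480 22528 45056; do
    echo "blocks=$blocks -> $((blocks * 100 / blockcnt))%"
done
```

Running this reproduces the byte count and MiB figure from the `dd` output and the three progress percentages seen in the rebuild JSON dumps.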
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:08.318 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:08.578 [2024-11-28 16:24:00.241055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.578 [2024-11-28 16:24:00.253128] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:08.578 "name": "raid_bdev1",
00:11:08.578 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:08.578 "strip_size_kb": 0,
00:11:08.578 "state": "online",
00:11:08.578 "raid_level": "raid1",
00:11:08.578 "superblock": true,
00:11:08.578 "num_base_bdevs": 2,
00:11:08.578 "num_base_bdevs_discovered": 1,
00:11:08.578 "num_base_bdevs_operational": 1,
00:11:08.578 "base_bdevs_list": [
00:11:08.578 {
00:11:08.578 "name": null,
00:11:08.578 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:08.578 "is_configured": false,
00:11:08.578 "data_offset": 0,
00:11:08.578 "data_size": 63488
00:11:08.578 },
00:11:08.578 {
00:11:08.578 "name": "BaseBdev2",
00:11:08.578 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:08.578 "is_configured": true,
00:11:08.578 "data_offset": 2048,
00:11:08.578 "data_size": 63488
00:11:08.578 }
00:11:08.578 ]
00:11:08.578 }'
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:08.578 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.147 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:09.147 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.147 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.147 [2024-11-28 16:24:00.692395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:09.147 [2024-11-28 16:24:00.696503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0
00:11:09.147 16:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.147 16:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:11:09.147 [2024-11-28 16:24:00.698316] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:10.086 "name": "raid_bdev1",
00:11:10.086 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:10.086 "strip_size_kb": 0,
00:11:10.086 "state": "online",
00:11:10.086 "raid_level": "raid1",
00:11:10.086 "superblock": true,
00:11:10.086 "num_base_bdevs": 2,
00:11:10.086 "num_base_bdevs_discovered": 2,
00:11:10.086 "num_base_bdevs_operational": 2,
00:11:10.086 "process": {
00:11:10.086 "type": "rebuild",
00:11:10.086 "target": "spare",
00:11:10.086 "progress": {
00:11:10.086 "blocks": 20480,
00:11:10.086 "percent": 32
00:11:10.086 }
00:11:10.086 },
00:11:10.086 "base_bdevs_list": [
00:11:10.086 {
00:11:10.086 "name": "spare",
00:11:10.086 "uuid": "7837677e-3c34-5690-8923-d94844018044",
00:11:10.086 "is_configured": true,
00:11:10.086 "data_offset": 2048,
00:11:10.086 "data_size": 63488
00:11:10.086 },
00:11:10.086 {
00:11:10.086 "name": "BaseBdev2",
00:11:10.086 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:10.086 "is_configured": true,
00:11:10.086 "data_offset": 2048,
00:11:10.086 "data_size": 63488
00:11:10.086 }
00:11:10.086 ]
00:11:10.086 }'
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.086 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.086 [2024-11-28 16:24:01.831015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:10.346 [2024-11-28 16:24:01.902778] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:10.346 [2024-11-28 16:24:01.902871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:10.346 [2024-11-28 16:24:01.902891] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:10.346 [2024-11-28 16:24:01.902899] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:10.346 "name": "raid_bdev1",
00:11:10.346 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:10.346 "strip_size_kb": 0,
00:11:10.346 "state": "online",
00:11:10.346 "raid_level": "raid1",
00:11:10.346 "superblock": true,
00:11:10.346 "num_base_bdevs": 2,
00:11:10.346 "num_base_bdevs_discovered": 1,
00:11:10.346 "num_base_bdevs_operational": 1,
00:11:10.346 "base_bdevs_list": [
00:11:10.346 {
00:11:10.346 "name": null,
00:11:10.346 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:10.346 "is_configured": false,
00:11:10.346 "data_offset": 0,
00:11:10.346 "data_size": 63488
00:11:10.346 },
00:11:10.346 {
00:11:10.346 "name": "BaseBdev2",
00:11:10.346 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:10.346 "is_configured": true,
00:11:10.346 "data_offset": 2048,
00:11:10.346 "data_size": 63488
00:11:10.346 }
00:11:10.346 ]
00:11:10.346 }'
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:10.346 16:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.606 16:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.865 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:10.865 "name": "raid_bdev1",
00:11:10.865 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:10.865 "strip_size_kb": 0,
00:11:10.865 "state": "online",
00:11:10.865 "raid_level": "raid1",
00:11:10.865 "superblock": true,
00:11:10.865 "num_base_bdevs": 2,
00:11:10.865 "num_base_bdevs_discovered": 1,
00:11:10.865 "num_base_bdevs_operational": 1,
00:11:10.865 "base_bdevs_list": [
00:11:10.865 {
00:11:10.865 "name": null,
00:11:10.865 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:10.865 "is_configured": false,
00:11:10.865 "data_offset": 0,
00:11:10.865 "data_size": 63488
00:11:10.865 },
00:11:10.866 {
00:11:10.866 "name": "BaseBdev2",
00:11:10.866 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:10.866 "is_configured": true,
00:11:10.866 "data_offset": 2048,
00:11:10.866 "data_size": 63488
00:11:10.866 }
00:11:10.866 ]
00:11:10.866 }'
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.866 [2024-11-28 16:24:02.474466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:10.866 [2024-11-28 16:24:02.478045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290
00:11:10.866 [2024-11-28 16:24:02.479854] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.866 16:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:11.806 "name": "raid_bdev1",
00:11:11.806 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:11.806 "strip_size_kb": 0,
00:11:11.806 "state": "online",
00:11:11.806 "raid_level": "raid1",
00:11:11.806 "superblock": true,
00:11:11.806 "num_base_bdevs": 2,
00:11:11.806 "num_base_bdevs_discovered": 2,
00:11:11.806 "num_base_bdevs_operational": 2,
00:11:11.806 "process": {
00:11:11.806 "type": "rebuild",
00:11:11.806 "target": "spare",
00:11:11.806 "progress": {
00:11:11.806 "blocks": 20480,
00:11:11.806 "percent": 32
00:11:11.806 }
00:11:11.806 },
00:11:11.806 "base_bdevs_list": [
00:11:11.806 {
00:11:11.806 "name": "spare",
00:11:11.806 "uuid": "7837677e-3c34-5690-8923-d94844018044",
00:11:11.806 "is_configured": true,
00:11:11.806 "data_offset": 2048,
00:11:11.806 "data_size": 63488
00:11:11.806 },
00:11:11.806 {
00:11:11.806 "name": "BaseBdev2",
00:11:11.806 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:11.806 "is_configured": true,
00:11:11.806 "data_offset": 2048,
00:11:11.806 "data_size": 63488
00:11:11.806 }
00:11:11.806 ]
00:11:11.806 }'
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:11.806 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:11:12.066 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=302
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.066 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:12.066 "name": "raid_bdev1",
00:11:12.066 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:12.066 "strip_size_kb": 0,
00:11:12.066 "state": "online",
00:11:12.066 "raid_level": "raid1",
00:11:12.066 "superblock": true,
00:11:12.066 "num_base_bdevs": 2,
00:11:12.066 "num_base_bdevs_discovered": 2,
00:11:12.066 "num_base_bdevs_operational": 2,
00:11:12.066 "process": {
00:11:12.066 "type": "rebuild",
00:11:12.066 "target": "spare",
00:11:12.066 "progress": {
00:11:12.066 "blocks": 22528,
00:11:12.066 "percent": 35
00:11:12.066 }
00:11:12.066 },
00:11:12.066 "base_bdevs_list": [
00:11:12.066 {
00:11:12.066 "name": "spare",
00:11:12.066 "uuid": "7837677e-3c34-5690-8923-d94844018044",
00:11:12.066 "is_configured": true,
00:11:12.066 "data_offset": 2048,
00:11:12.066 "data_size": 63488
00:11:12.066 },
00:11:12.067 {
00:11:12.067 "name": "BaseBdev2",
00:11:12.067 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:12.067 "is_configured": true,
00:11:12.067 "data_offset": 2048,
00:11:12.067 "data_size": 63488
00:11:12.067 }
00:11:12.067 ]
00:11:12.067 }'
00:11:12.067 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:12.067 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:12.067 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:12.067 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:12.067 16:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:13.007 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.008 16:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:13.268 "name": "raid_bdev1",
00:11:13.268 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e",
00:11:13.268 "strip_size_kb": 0,
00:11:13.268 "state": "online",
00:11:13.268 "raid_level": "raid1",
00:11:13.268 "superblock": true,
00:11:13.268 "num_base_bdevs": 2,
00:11:13.268 "num_base_bdevs_discovered": 2,
00:11:13.268 "num_base_bdevs_operational": 2,
00:11:13.268 "process": {
00:11:13.268 "type": "rebuild",
00:11:13.268 "target": "spare",
00:11:13.268 "progress": {
00:11:13.268 "blocks": 45056,
00:11:13.268 "percent": 70
00:11:13.268 }
00:11:13.268 },
00:11:13.268 "base_bdevs_list": [
00:11:13.268 {
00:11:13.268 "name": "spare",
00:11:13.268 "uuid": "7837677e-3c34-5690-8923-d94844018044",
00:11:13.268 "is_configured": true,
00:11:13.268 "data_offset": 2048,
00:11:13.268 "data_size": 63488
00:11:13.268 },
00:11:13.268 {
00:11:13.268 "name": "BaseBdev2",
00:11:13.268 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c",
00:11:13.268 "is_configured": true,
00:11:13.268 "data_offset": 2048,
00:11:13.268 "data_size": 63488
00:11:13.268 }
00:11:13.268 ]
00:11:13.268 }'
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:13.268 16:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:13.837 [2024-11-28 16:24:05.590463] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:11:13.837 [2024-11-28 16:24:05.590549] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:11:13.837 [2024-11-28 16:24:05.590653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb --
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.408 "name": "raid_bdev1", 00:11:14.408 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:14.408 "strip_size_kb": 0, 00:11:14.408 "state": "online", 00:11:14.408 "raid_level": "raid1", 00:11:14.408 "superblock": true, 00:11:14.408 "num_base_bdevs": 2, 00:11:14.408 "num_base_bdevs_discovered": 2, 00:11:14.408 "num_base_bdevs_operational": 2, 00:11:14.408 "base_bdevs_list": [ 00:11:14.408 { 00:11:14.408 "name": "spare", 00:11:14.408 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 }, 00:11:14.408 { 00:11:14.408 "name": "BaseBdev2", 00:11:14.408 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 } 00:11:14.408 ] 00:11:14.408 }' 00:11:14.408 16:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.408 "name": "raid_bdev1", 00:11:14.408 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:14.408 "strip_size_kb": 0, 00:11:14.408 "state": "online", 00:11:14.408 "raid_level": "raid1", 00:11:14.408 "superblock": true, 00:11:14.408 "num_base_bdevs": 2, 00:11:14.408 "num_base_bdevs_discovered": 2, 00:11:14.408 "num_base_bdevs_operational": 2, 
00:11:14.408 "base_bdevs_list": [ 00:11:14.408 { 00:11:14.408 "name": "spare", 00:11:14.408 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 }, 00:11:14.408 { 00:11:14.408 "name": "BaseBdev2", 00:11:14.408 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:14.408 "is_configured": true, 00:11:14.408 "data_offset": 2048, 00:11:14.408 "data_size": 63488 00:11:14.408 } 00:11:14.408 ] 00:11:14.408 }' 00:11:14.408 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.409 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:14.409 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.669 16:24:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.669 "name": "raid_bdev1", 00:11:14.669 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:14.669 "strip_size_kb": 0, 00:11:14.669 "state": "online", 00:11:14.669 "raid_level": "raid1", 00:11:14.669 "superblock": true, 00:11:14.669 "num_base_bdevs": 2, 00:11:14.669 "num_base_bdevs_discovered": 2, 00:11:14.669 "num_base_bdevs_operational": 2, 00:11:14.669 "base_bdevs_list": [ 00:11:14.669 { 00:11:14.669 "name": "spare", 00:11:14.669 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:14.669 "is_configured": true, 00:11:14.669 "data_offset": 2048, 00:11:14.669 "data_size": 63488 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "name": "BaseBdev2", 00:11:14.669 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:14.669 "is_configured": true, 00:11:14.669 "data_offset": 2048, 00:11:14.669 "data_size": 63488 00:11:14.669 } 00:11:14.669 ] 00:11:14.669 }' 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.669 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 [2024-11-28 16:24:06.624944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.930 [2024-11-28 16:24:06.624982] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.930 [2024-11-28 16:24:06.625070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.930 [2024-11-28 16:24:06.625147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.930 [2024-11-28 16:24:06.625164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.930 
16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:14.930 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:15.191 /dev/nbd0 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.191 16:24:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.191 1+0 records in 00:11:15.191 1+0 records out 00:11:15.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360453 s, 11.4 MB/s 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.191 16:24:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:15.452 /dev/nbd1 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- 
# grep -q -w nbd1 /proc/partitions 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.452 1+0 records in 00:11:15.452 1+0 records out 00:11:15.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040599 s, 10.1 MB/s 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.452 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.712 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:15.971 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.972 [2024-11-28 16:24:07.693398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:15.972 [2024-11-28 16:24:07.693457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.972 [2024-11-28 16:24:07.693477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:15.972 [2024-11-28 16:24:07.693490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.972 [2024-11-28 16:24:07.695535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.972 [2024-11-28 16:24:07.695576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:15.972 [2024-11-28 16:24:07.695652] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:11:15.972 [2024-11-28 16:24:07.695713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.972 [2024-11-28 16:24:07.695859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.972 spare 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.972 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.232 [2024-11-28 16:24:07.795772] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:16.232 [2024-11-28 16:24:07.795801] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:16.232 [2024-11-28 16:24:07.796046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:16.232 [2024-11-28 16:24:07.796192] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:16.232 [2024-11-28 16:24:07.796211] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:16.232 [2024-11-28 16:24:07.796324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.232 16:24:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.232 "name": "raid_bdev1", 00:11:16.232 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:16.232 "strip_size_kb": 0, 00:11:16.232 "state": "online", 00:11:16.232 "raid_level": "raid1", 00:11:16.232 "superblock": true, 00:11:16.232 "num_base_bdevs": 2, 00:11:16.232 "num_base_bdevs_discovered": 2, 00:11:16.232 "num_base_bdevs_operational": 2, 00:11:16.232 "base_bdevs_list": [ 00:11:16.232 { 00:11:16.232 "name": "spare", 00:11:16.232 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:16.232 "is_configured": true, 00:11:16.232 "data_offset": 2048, 00:11:16.232 "data_size": 63488 00:11:16.232 }, 00:11:16.232 { 
00:11:16.232 "name": "BaseBdev2", 00:11:16.232 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:16.232 "is_configured": true, 00:11:16.232 "data_offset": 2048, 00:11:16.232 "data_size": 63488 00:11:16.232 } 00:11:16.232 ] 00:11:16.232 }' 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.232 16:24:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.802 "name": "raid_bdev1", 00:11:16.802 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:16.802 "strip_size_kb": 0, 00:11:16.802 "state": "online", 00:11:16.802 "raid_level": "raid1", 00:11:16.802 "superblock": true, 00:11:16.802 "num_base_bdevs": 2, 00:11:16.802 "num_base_bdevs_discovered": 2, 00:11:16.802 "num_base_bdevs_operational": 2, 
00:11:16.802 "base_bdevs_list": [ 00:11:16.802 { 00:11:16.802 "name": "spare", 00:11:16.802 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:16.802 "is_configured": true, 00:11:16.802 "data_offset": 2048, 00:11:16.802 "data_size": 63488 00:11:16.802 }, 00:11:16.802 { 00:11:16.802 "name": "BaseBdev2", 00:11:16.802 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:16.802 "is_configured": true, 00:11:16.802 "data_offset": 2048, 00:11:16.802 "data_size": 63488 00:11:16.802 } 00:11:16.802 ] 00:11:16.802 }' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.802 [2024-11-28 16:24:08.472104] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.802 "name": "raid_bdev1", 00:11:16.802 "uuid": 
"20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:16.802 "strip_size_kb": 0, 00:11:16.802 "state": "online", 00:11:16.802 "raid_level": "raid1", 00:11:16.802 "superblock": true, 00:11:16.802 "num_base_bdevs": 2, 00:11:16.802 "num_base_bdevs_discovered": 1, 00:11:16.802 "num_base_bdevs_operational": 1, 00:11:16.802 "base_bdevs_list": [ 00:11:16.802 { 00:11:16.802 "name": null, 00:11:16.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.802 "is_configured": false, 00:11:16.802 "data_offset": 0, 00:11:16.802 "data_size": 63488 00:11:16.802 }, 00:11:16.802 { 00:11:16.802 "name": "BaseBdev2", 00:11:16.802 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:16.802 "is_configured": true, 00:11:16.802 "data_offset": 2048, 00:11:16.802 "data_size": 63488 00:11:16.802 } 00:11:16.802 ] 00:11:16.802 }' 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.802 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.390 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:17.390 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.390 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.390 [2024-11-28 16:24:08.891484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:17.390 [2024-11-28 16:24:08.891727] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:17.390 [2024-11-28 16:24:08.891803] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:17.390 [2024-11-28 16:24:08.891883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:17.390 [2024-11-28 16:24:08.895876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:17.390 16:24:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.390 16:24:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:17.390 [2024-11-28 16:24:08.897748] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.347 "name": "raid_bdev1", 00:11:18.347 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:18.347 "strip_size_kb": 0, 00:11:18.347 "state": "online", 00:11:18.347 "raid_level": "raid1", 
00:11:18.347 "superblock": true, 00:11:18.347 "num_base_bdevs": 2, 00:11:18.347 "num_base_bdevs_discovered": 2, 00:11:18.347 "num_base_bdevs_operational": 2, 00:11:18.347 "process": { 00:11:18.347 "type": "rebuild", 00:11:18.347 "target": "spare", 00:11:18.347 "progress": { 00:11:18.347 "blocks": 20480, 00:11:18.347 "percent": 32 00:11:18.347 } 00:11:18.347 }, 00:11:18.347 "base_bdevs_list": [ 00:11:18.347 { 00:11:18.347 "name": "spare", 00:11:18.347 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:18.347 "is_configured": true, 00:11:18.347 "data_offset": 2048, 00:11:18.347 "data_size": 63488 00:11:18.347 }, 00:11:18.347 { 00:11:18.347 "name": "BaseBdev2", 00:11:18.347 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:18.347 "is_configured": true, 00:11:18.347 "data_offset": 2048, 00:11:18.347 "data_size": 63488 00:11:18.347 } 00:11:18.347 ] 00:11:18.347 }' 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.347 16:24:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.347 [2024-11-28 16:24:10.050688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:18.347 [2024-11-28 16:24:10.101764] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:18.347 [2024-11-28 16:24:10.101883] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:18.347 [2024-11-28 16:24:10.101923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:18.347 [2024-11-28 16:24:10.101944] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.347 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.607 "name": "raid_bdev1", 00:11:18.607 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:18.607 "strip_size_kb": 0, 00:11:18.607 "state": "online", 00:11:18.607 "raid_level": "raid1", 00:11:18.607 "superblock": true, 00:11:18.607 "num_base_bdevs": 2, 00:11:18.607 "num_base_bdevs_discovered": 1, 00:11:18.607 "num_base_bdevs_operational": 1, 00:11:18.607 "base_bdevs_list": [ 00:11:18.607 { 00:11:18.607 "name": null, 00:11:18.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.607 "is_configured": false, 00:11:18.607 "data_offset": 0, 00:11:18.607 "data_size": 63488 00:11:18.607 }, 00:11:18.607 { 00:11:18.607 "name": "BaseBdev2", 00:11:18.607 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:18.607 "is_configured": true, 00:11:18.607 "data_offset": 2048, 00:11:18.607 "data_size": 63488 00:11:18.607 } 00:11:18.607 ] 00:11:18.607 }' 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.607 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.867 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:18.867 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.867 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.867 [2024-11-28 16:24:10.529436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:18.867 [2024-11-28 16:24:10.529547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.867 [2024-11-28 16:24:10.529587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:18.867 [2024-11-28 16:24:10.529615] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.867 [2024-11-28 16:24:10.530076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.867 [2024-11-28 16:24:10.530132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:18.867 [2024-11-28 16:24:10.530242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:18.867 [2024-11-28 16:24:10.530280] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:18.867 [2024-11-28 16:24:10.530327] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:18.867 [2024-11-28 16:24:10.530379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:18.867 [2024-11-28 16:24:10.534269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:18.867 spare 00:11:18.867 16:24:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.867 16:24:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:18.867 [2024-11-28 16:24:10.536110] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.811 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.070 "name": "raid_bdev1", 00:11:20.070 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:20.070 "strip_size_kb": 0, 00:11:20.070 "state": "online", 00:11:20.070 "raid_level": "raid1", 00:11:20.070 "superblock": true, 00:11:20.070 "num_base_bdevs": 2, 00:11:20.070 "num_base_bdevs_discovered": 2, 00:11:20.070 "num_base_bdevs_operational": 2, 00:11:20.070 "process": { 00:11:20.070 "type": "rebuild", 00:11:20.070 "target": "spare", 00:11:20.070 "progress": { 00:11:20.070 "blocks": 20480, 00:11:20.070 "percent": 32 00:11:20.070 } 00:11:20.070 }, 00:11:20.070 "base_bdevs_list": [ 00:11:20.070 { 00:11:20.070 "name": "spare", 00:11:20.070 "uuid": "7837677e-3c34-5690-8923-d94844018044", 00:11:20.070 "is_configured": true, 00:11:20.070 "data_offset": 2048, 00:11:20.070 "data_size": 63488 00:11:20.070 }, 00:11:20.070 { 00:11:20.070 "name": "BaseBdev2", 00:11:20.070 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:20.070 "is_configured": true, 00:11:20.070 "data_offset": 2048, 00:11:20.070 "data_size": 63488 00:11:20.070 } 00:11:20.070 ] 00:11:20.070 }' 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.070 
16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.070 [2024-11-28 16:24:11.696322] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.070 [2024-11-28 16:24:11.740166] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:20.070 [2024-11-28 16:24:11.740229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.070 [2024-11-28 16:24:11.740244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.070 [2024-11-28 16:24:11.740253] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:20.070 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.071 "name": "raid_bdev1", 00:11:20.071 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:20.071 "strip_size_kb": 0, 00:11:20.071 "state": "online", 00:11:20.071 "raid_level": "raid1", 00:11:20.071 "superblock": true, 00:11:20.071 "num_base_bdevs": 2, 00:11:20.071 "num_base_bdevs_discovered": 1, 00:11:20.071 "num_base_bdevs_operational": 1, 00:11:20.071 "base_bdevs_list": [ 00:11:20.071 { 00:11:20.071 "name": null, 00:11:20.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.071 "is_configured": false, 00:11:20.071 "data_offset": 0, 00:11:20.071 "data_size": 63488 00:11:20.071 }, 00:11:20.071 { 00:11:20.071 "name": "BaseBdev2", 00:11:20.071 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:20.071 "is_configured": true, 00:11:20.071 "data_offset": 2048, 00:11:20.071 "data_size": 63488 00:11:20.071 } 00:11:20.071 ] 00:11:20.071 }' 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.071 16:24:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.639 16:24:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.639 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.639 "name": "raid_bdev1", 00:11:20.639 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:20.639 "strip_size_kb": 0, 00:11:20.639 "state": "online", 00:11:20.639 "raid_level": "raid1", 00:11:20.639 "superblock": true, 00:11:20.639 "num_base_bdevs": 2, 00:11:20.639 "num_base_bdevs_discovered": 1, 00:11:20.639 "num_base_bdevs_operational": 1, 00:11:20.639 "base_bdevs_list": [ 00:11:20.639 { 00:11:20.639 "name": null, 00:11:20.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.639 "is_configured": false, 00:11:20.639 "data_offset": 0, 00:11:20.639 "data_size": 63488 00:11:20.640 }, 00:11:20.640 { 00:11:20.640 "name": "BaseBdev2", 00:11:20.640 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:20.640 "is_configured": true, 00:11:20.640 "data_offset": 2048, 00:11:20.640 "data_size": 
63488 00:11:20.640 } 00:11:20.640 ] 00:11:20.640 }' 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 [2024-11-28 16:24:12.383407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:20.640 [2024-11-28 16:24:12.383468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.640 [2024-11-28 16:24:12.383488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:20.640 [2024-11-28 16:24:12.383500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.640 [2024-11-28 16:24:12.383961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.640 [2024-11-28 16:24:12.383985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:20.640 [2024-11-28 16:24:12.384059] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:20.640 [2024-11-28 16:24:12.384103] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:20.640 [2024-11-28 16:24:12.384112] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:20.640 [2024-11-28 16:24:12.384135] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:20.640 BaseBdev1 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 16:24:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.022 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.022 "name": "raid_bdev1", 00:11:22.022 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:22.022 "strip_size_kb": 0, 00:11:22.022 "state": "online", 00:11:22.022 "raid_level": "raid1", 00:11:22.022 "superblock": true, 00:11:22.022 "num_base_bdevs": 2, 00:11:22.022 "num_base_bdevs_discovered": 1, 00:11:22.022 "num_base_bdevs_operational": 1, 00:11:22.022 "base_bdevs_list": [ 00:11:22.022 { 00:11:22.022 "name": null, 00:11:22.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.022 "is_configured": false, 00:11:22.022 "data_offset": 0, 00:11:22.022 "data_size": 63488 00:11:22.023 }, 00:11:22.023 { 00:11:22.023 "name": "BaseBdev2", 00:11:22.023 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:22.023 "is_configured": true, 00:11:22.023 "data_offset": 2048, 00:11:22.023 "data_size": 63488 00:11:22.023 } 00:11:22.023 ] 00:11:22.023 }' 00:11:22.023 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.023 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.283 "name": "raid_bdev1", 00:11:22.283 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:22.283 "strip_size_kb": 0, 00:11:22.283 "state": "online", 00:11:22.283 "raid_level": "raid1", 00:11:22.283 "superblock": true, 00:11:22.283 "num_base_bdevs": 2, 00:11:22.283 "num_base_bdevs_discovered": 1, 00:11:22.283 "num_base_bdevs_operational": 1, 00:11:22.283 "base_bdevs_list": [ 00:11:22.283 { 00:11:22.283 "name": null, 00:11:22.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.283 "is_configured": false, 00:11:22.283 "data_offset": 0, 00:11:22.283 "data_size": 63488 00:11:22.283 }, 00:11:22.283 { 00:11:22.283 "name": "BaseBdev2", 00:11:22.283 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:22.283 "is_configured": true, 00:11:22.283 "data_offset": 2048, 00:11:22.283 "data_size": 63488 00:11:22.283 } 00:11:22.283 ] 00:11:22.283 }' 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:22.283 16:24:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 [2024-11-28 16:24:13.980703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.283 [2024-11-28 16:24:13.980873] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:22.283 [2024-11-28 16:24:13.980886] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:22.283 request: 00:11:22.283 { 00:11:22.283 "base_bdev": "BaseBdev1", 00:11:22.283 "raid_bdev": "raid_bdev1", 00:11:22.283 "method": 
"bdev_raid_add_base_bdev", 00:11:22.283 "req_id": 1 00:11:22.283 } 00:11:22.283 Got JSON-RPC error response 00:11:22.283 response: 00:11:22.283 { 00:11:22.283 "code": -22, 00:11:22.283 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:22.283 } 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:22.283 16:24:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.663 16:24:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.663 16:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.663 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.663 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.663 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.663 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.663 "name": "raid_bdev1", 00:11:23.663 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:23.663 "strip_size_kb": 0, 00:11:23.663 "state": "online", 00:11:23.663 "raid_level": "raid1", 00:11:23.663 "superblock": true, 00:11:23.663 "num_base_bdevs": 2, 00:11:23.663 "num_base_bdevs_discovered": 1, 00:11:23.663 "num_base_bdevs_operational": 1, 00:11:23.663 "base_bdevs_list": [ 00:11:23.663 { 00:11:23.663 "name": null, 00:11:23.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.663 "is_configured": false, 00:11:23.663 "data_offset": 0, 00:11:23.663 "data_size": 63488 00:11:23.663 }, 00:11:23.663 { 00:11:23.663 "name": "BaseBdev2", 00:11:23.663 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:23.663 "is_configured": true, 00:11:23.663 "data_offset": 2048, 00:11:23.663 "data_size": 63488 00:11:23.663 } 00:11:23.663 ] 00:11:23.663 }' 00:11:23.663 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.663 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.923 "name": "raid_bdev1", 00:11:23.923 "uuid": "20cc3ba4-2d16-4705-9e1e-8e455955d40e", 00:11:23.923 "strip_size_kb": 0, 00:11:23.923 "state": "online", 00:11:23.923 "raid_level": "raid1", 00:11:23.923 "superblock": true, 00:11:23.923 "num_base_bdevs": 2, 00:11:23.923 "num_base_bdevs_discovered": 1, 00:11:23.923 "num_base_bdevs_operational": 1, 00:11:23.923 "base_bdevs_list": [ 00:11:23.923 { 00:11:23.923 "name": null, 00:11:23.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.923 "is_configured": false, 00:11:23.923 "data_offset": 0, 00:11:23.923 "data_size": 63488 00:11:23.923 }, 00:11:23.923 { 00:11:23.923 "name": "BaseBdev2", 00:11:23.923 "uuid": "484a4084-b323-5f9f-aac2-e75e71674c6c", 00:11:23.923 "is_configured": true, 00:11:23.923 "data_offset": 2048, 00:11:23.923 "data_size": 63488 00:11:23.923 } 00:11:23.923 ] 00:11:23.923 }' 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:23.923 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86426 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86426 ']' 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86426 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86426 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86426' 00:11:23.924 killing process with pid 86426 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86426 00:11:23.924 Received shutdown signal, test time was about 60.000000 seconds 00:11:23.924 00:11:23.924 Latency(us) 00:11:23.924 [2024-11-28T16:24:15.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.924 [2024-11-28T16:24:15.695Z] =================================================================================================================== 00:11:23.924 [2024-11-28T16:24:15.695Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:23.924 [2024-11-28 16:24:15.620776] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.924 [2024-11-28 
16:24:15.620922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.924 [2024-11-28 16:24:15.620976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.924 [2024-11-28 16:24:15.620985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:23.924 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86426 00:11:23.924 [2024-11-28 16:24:15.652083] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.184 16:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:24.184 00:11:24.184 real 0m21.113s 00:11:24.184 user 0m26.292s 00:11:24.184 sys 0m3.640s 00:11:24.184 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.184 ************************************ 00:11:24.184 END TEST raid_rebuild_test_sb 00:11:24.184 ************************************ 00:11:24.184 16:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.184 16:24:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:24.184 16:24:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:24.184 16:24:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.184 16:24:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.444 ************************************ 00:11:24.444 START TEST raid_rebuild_test_io 00:11:24.444 ************************************ 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:24.444 
16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87131 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87131 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87131 ']' 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.444 16:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.444 [2024-11-28 16:24:16.057770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:24.444 [2024-11-28 16:24:16.057987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87131 ] 00:11:24.444 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:24.444 Zero copy mechanism will not be used. 
00:11:24.704 [2024-11-28 16:24:16.218371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.704 [2024-11-28 16:24:16.263630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.704 [2024-11-28 16:24:16.305612] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.704 [2024-11-28 16:24:16.305719] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.273 BaseBdev1_malloc 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.273 [2024-11-28 16:24:16.915806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:25.273 [2024-11-28 16:24:16.915881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.273 [2024-11-28 16:24:16.915914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:25.273 [2024-11-28 
16:24:16.915927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.273 [2024-11-28 16:24:16.918064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.273 [2024-11-28 16:24:16.918101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.273 BaseBdev1 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.273 BaseBdev2_malloc 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.273 [2024-11-28 16:24:16.964905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:25.273 [2024-11-28 16:24:16.965008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.273 [2024-11-28 16:24:16.965054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:25.273 [2024-11-28 16:24:16.965075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.273 [2024-11-28 16:24:16.969921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:25.273 [2024-11-28 16:24:16.970084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.273 BaseBdev2 00:11:25.273 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.274 spare_malloc 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.274 spare_delay 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.274 16:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.274 [2024-11-28 16:24:17.008074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:25.274 [2024-11-28 16:24:17.008165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.274 [2024-11-28 16:24:17.008220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:25.274 [2024-11-28 16:24:17.008248] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.274 [2024-11-28 16:24:17.010341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.274 [2024-11-28 16:24:17.010406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:25.274 spare 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.274 [2024-11-28 16:24:17.020095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.274 [2024-11-28 16:24:17.021954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.274 [2024-11-28 16:24:17.022078] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:25.274 [2024-11-28 16:24:17.022111] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:25.274 [2024-11-28 16:24:17.022402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:25.274 [2024-11-28 16:24:17.022520] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:25.274 [2024-11-28 16:24:17.022534] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:25.274 [2024-11-28 16:24:17.022664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.274 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.534 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.534 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.534 "name": "raid_bdev1", 00:11:25.534 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a", 00:11:25.534 "strip_size_kb": 0, 00:11:25.534 "state": "online", 00:11:25.534 "raid_level": "raid1", 00:11:25.534 "superblock": false, 00:11:25.534 "num_base_bdevs": 2, 00:11:25.534 
"num_base_bdevs_discovered": 2, 00:11:25.534 "num_base_bdevs_operational": 2, 00:11:25.534 "base_bdevs_list": [ 00:11:25.534 { 00:11:25.534 "name": "BaseBdev1", 00:11:25.534 "uuid": "8819651e-b106-5e7d-96d9-392e0520eef5", 00:11:25.534 "is_configured": true, 00:11:25.534 "data_offset": 0, 00:11:25.534 "data_size": 65536 00:11:25.534 }, 00:11:25.534 { 00:11:25.534 "name": "BaseBdev2", 00:11:25.534 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7", 00:11:25.534 "is_configured": true, 00:11:25.534 "data_offset": 0, 00:11:25.534 "data_size": 65536 00:11:25.534 } 00:11:25.534 ] 00:11:25.534 }' 00:11:25.534 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.534 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:25.794 [2024-11-28 16:24:17.459679] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 [2024-11-28 16:24:17.535239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.794 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.055 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.055 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.055 "name": "raid_bdev1", 00:11:26.055 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a", 00:11:26.055 "strip_size_kb": 0, 00:11:26.055 "state": "online", 00:11:26.055 "raid_level": "raid1", 00:11:26.055 "superblock": false, 00:11:26.055 "num_base_bdevs": 2, 00:11:26.055 "num_base_bdevs_discovered": 1, 00:11:26.055 "num_base_bdevs_operational": 1, 00:11:26.055 "base_bdevs_list": [ 00:11:26.055 { 00:11:26.055 "name": null, 00:11:26.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.055 "is_configured": false, 00:11:26.055 "data_offset": 0, 00:11:26.055 "data_size": 65536 00:11:26.055 }, 00:11:26.055 { 00:11:26.055 "name": "BaseBdev2", 00:11:26.055 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7", 00:11:26.055 "is_configured": true, 00:11:26.055 "data_offset": 0, 00:11:26.055 "data_size": 65536 00:11:26.055 } 00:11:26.055 ] 00:11:26.055 }' 00:11:26.055 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.055 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.055 [2024-11-28 16:24:17.625054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:26.055 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:11:26.055 Zero copy mechanism will not be used. 00:11:26.055 Running I/O for 60 seconds... 00:11:26.314 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:26.314 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.314 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.314 [2024-11-28 16:24:17.934938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.314 16:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.314 16:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:26.314 [2024-11-28 16:24:17.965364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:26.314 [2024-11-28 16:24:17.967343] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:26.314 [2024-11-28 16:24:18.079079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:26.314 [2024-11-28 16:24:18.079366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:26.574 [2024-11-28 16:24:18.310177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:26.574 [2024-11-28 16:24:18.310439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:26.834 [2024-11-28 16:24:18.558818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:26.834 [2024-11-28 16:24:18.559227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 
00:11:27.093 197.00 IOPS, 591.00 MiB/s [2024-11-28T16:24:18.864Z] [2024-11-28 16:24:18.766931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.093 [2024-11-28 16:24:18.767208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.351 16:24:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.351 [2024-11-28 16:24:19.010402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:27.351 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.351 "name": "raid_bdev1", 00:11:27.351 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a", 00:11:27.351 "strip_size_kb": 0, 00:11:27.351 "state": "online", 00:11:27.351 "raid_level": "raid1", 00:11:27.351 "superblock": false, 
00:11:27.351 "num_base_bdevs": 2, 00:11:27.351 "num_base_bdevs_discovered": 2, 00:11:27.351 "num_base_bdevs_operational": 2, 00:11:27.351 "process": { 00:11:27.351 "type": "rebuild", 00:11:27.351 "target": "spare", 00:11:27.351 "progress": { 00:11:27.351 "blocks": 12288, 00:11:27.351 "percent": 18 00:11:27.351 } 00:11:27.351 }, 00:11:27.351 "base_bdevs_list": [ 00:11:27.351 { 00:11:27.351 "name": "spare", 00:11:27.351 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa", 00:11:27.351 "is_configured": true, 00:11:27.351 "data_offset": 0, 00:11:27.351 "data_size": 65536 00:11:27.351 }, 00:11:27.351 { 00:11:27.351 "name": "BaseBdev2", 00:11:27.351 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7", 00:11:27.351 "is_configured": true, 00:11:27.351 "data_offset": 0, 00:11:27.351 "data_size": 65536 00:11:27.351 } 00:11:27.351 ] 00:11:27.351 }' 00:11:27.351 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.351 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.351 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.352 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.352 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:27.352 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.352 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.352 [2024-11-28 16:24:19.095472] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:27.352 [2024-11-28 16:24:19.118681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:27.611 [2024-11-28 16:24:19.236841] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device
00:11:27.611 [2024-11-28 16:24:19.244284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:27.611 [2024-11-28 16:24:19.244324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:27.611 [2024-11-28 16:24:19.244336] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:27.611 [2024-11-28 16:24:19.267109] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.611 "name": "raid_bdev1",
00:11:27.611 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:27.611 "strip_size_kb": 0,
00:11:27.611 "state": "online",
00:11:27.611 "raid_level": "raid1",
00:11:27.611 "superblock": false,
00:11:27.611 "num_base_bdevs": 2,
00:11:27.611 "num_base_bdevs_discovered": 1,
00:11:27.611 "num_base_bdevs_operational": 1,
00:11:27.611 "base_bdevs_list": [
00:11:27.611 {
00:11:27.611 "name": null,
00:11:27.611 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:27.611 "is_configured": false,
00:11:27.611 "data_offset": 0,
00:11:27.611 "data_size": 65536
00:11:27.611 },
00:11:27.611 {
00:11:27.611 "name": "BaseBdev2",
00:11:27.611 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:27.611 "is_configured": true,
00:11:27.611 "data_offset": 0,
00:11:27.611 "data_size": 65536
00:11:27.611 }
00:11:27.611 ]
00:11:27.611 }'
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.611 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:28.130 190.50 IOPS, 571.50 MiB/s [2024-11-28T16:24:19.901Z] 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:28.130 "name": "raid_bdev1",
00:11:28.130 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:28.130 "strip_size_kb": 0,
00:11:28.130 "state": "online",
00:11:28.130 "raid_level": "raid1",
00:11:28.130 "superblock": false,
00:11:28.130 "num_base_bdevs": 2,
00:11:28.130 "num_base_bdevs_discovered": 1,
00:11:28.130 "num_base_bdevs_operational": 1,
00:11:28.130 "base_bdevs_list": [
00:11:28.130 {
00:11:28.130 "name": null,
00:11:28.130 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.130 "is_configured": false,
00:11:28.130 "data_offset": 0,
00:11:28.130 "data_size": 65536
00:11:28.130 },
00:11:28.130 {
00:11:28.130 "name": "BaseBdev2",
00:11:28.130 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:28.130 "is_configured": true,
00:11:28.130 "data_offset": 0,
00:11:28.130 "data_size": 65536
00:11:28.130 }
00:11:28.130 ]
00:11:28.130 }'
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:28.130 [2024-11-28 16:24:19.826873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.130 16:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:11:28.130 [2024-11-28 16:24:19.881918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:28.130 [2024-11-28 16:24:19.883689] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:28.389 [2024-11-28 16:24:19.994985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:11:28.389 [2024-11-28 16:24:19.995344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:11:28.649 [2024-11-28 16:24:20.208264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:11:28.649 [2024-11-28 16:24:20.208562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:11:28.907 168.33 IOPS, 505.00 MiB/s [2024-11-28T16:24:20.678Z] [2024-11-28 16:24:20.667874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:11:28.907 [2024-11-28 16:24:20.668106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:29.166 "name": "raid_bdev1",
00:11:29.166 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:29.166 "strip_size_kb": 0,
00:11:29.166 "state": "online",
00:11:29.166 "raid_level": "raid1",
00:11:29.166 "superblock": false,
00:11:29.166 "num_base_bdevs": 2,
00:11:29.166 "num_base_bdevs_discovered": 2,
00:11:29.166 "num_base_bdevs_operational": 2,
00:11:29.166 "process": {
00:11:29.166 "type": "rebuild",
00:11:29.166 "target": "spare",
00:11:29.166 "progress": {
00:11:29.166 "blocks": 10240,
00:11:29.166 "percent": 15
00:11:29.166 }
00:11:29.166 },
00:11:29.166 "base_bdevs_list": [
00:11:29.166 {
00:11:29.166 "name": "spare",
00:11:29.166 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:29.166 "is_configured": true,
00:11:29.166 "data_offset": 0,
00:11:29.166 "data_size": 65536
00:11:29.166 },
00:11:29.166 {
00:11:29.166 "name": "BaseBdev2",
00:11:29.166 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:29.166 "is_configured": true,
00:11:29.166 "data_offset": 0,
00:11:29.166 "data_size": 65536
00:11:29.166 }
00:11:29.166 ]
00:11:29.166 }'
00:11:29.166 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:29.427 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:29.427 16:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:29.427 [2024-11-28 16:24:20.999697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=320
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:29.427 "name": "raid_bdev1",
00:11:29.427 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:29.427 "strip_size_kb": 0,
00:11:29.427 "state": "online",
00:11:29.427 "raid_level": "raid1",
00:11:29.427 "superblock": false,
00:11:29.427 "num_base_bdevs": 2,
00:11:29.427 "num_base_bdevs_discovered": 2,
00:11:29.427 "num_base_bdevs_operational": 2,
00:11:29.427 "process": {
00:11:29.427 "type": "rebuild",
00:11:29.427 "target": "spare",
00:11:29.427 "progress": {
00:11:29.427 "blocks": 14336,
00:11:29.427 "percent": 21
00:11:29.427 }
00:11:29.427 },
00:11:29.427 "base_bdevs_list": [
00:11:29.427 {
00:11:29.427 "name": "spare",
00:11:29.427 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:29.427 "is_configured": true,
00:11:29.427 "data_offset": 0,
00:11:29.427 "data_size": 65536
00:11:29.427 },
00:11:29.427 {
00:11:29.427 "name": "BaseBdev2",
00:11:29.427 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:29.427 "is_configured": true,
00:11:29.427 "data_offset": 0,
00:11:29.427 "data_size": 65536
00:11:29.427 }
00:11:29.427 ]
00:11:29.427 }'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:29.427 16:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:29.703 [2024-11-28 16:24:21.206062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:11:29.703 [2024-11-28 16:24:21.206358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:11:30.227 135.50 IOPS, 406.50 MiB/s [2024-11-28T16:24:21.998Z] [2024-11-28 16:24:21.824529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:11:30.227 [2024-11-28 16:24:21.931377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:11:30.486 [2024-11-28 16:24:22.165421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.486 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:30.487 16:24:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.487 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:30.487 "name": "raid_bdev1",
00:11:30.487 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:30.487 "strip_size_kb": 0,
00:11:30.487 "state": "online",
00:11:30.487 "raid_level": "raid1",
00:11:30.487 "superblock": false,
00:11:30.487 "num_base_bdevs": 2,
00:11:30.487 "num_base_bdevs_discovered": 2,
00:11:30.487 "num_base_bdevs_operational": 2,
00:11:30.487 "process": {
00:11:30.487 "type": "rebuild",
00:11:30.487 "target": "spare",
00:11:30.487 "progress": {
00:11:30.487 "blocks": 32768,
00:11:30.487 "percent": 50
00:11:30.487 }
00:11:30.487 },
00:11:30.487 "base_bdevs_list": [
00:11:30.487 {
00:11:30.487 "name": "spare",
00:11:30.487 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:30.487 "is_configured": true,
00:11:30.487 "data_offset": 0,
00:11:30.487 "data_size": 65536
00:11:30.487 },
00:11:30.487 {
00:11:30.487 "name": "BaseBdev2",
00:11:30.487 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:30.487 "is_configured": true,
00:11:30.487 "data_offset": 0,
00:11:30.487 "data_size": 65536
00:11:30.487 }
00:11:30.487 ]
00:11:30.487 }'
00:11:30.746 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:30.746 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:30.746 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:30.746 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:30.746 16:24:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:31.005 116.20 IOPS, 348.60 MiB/s [2024-11-28T16:24:22.776Z] [2024-11-28 16:24:22.703083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:11:31.005 [2024-11-28 16:24:22.703298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:11:31.265 [2024-11-28 16:24:23.015577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:11:31.525 [2024-11-28 16:24:23.221648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:11:31.525 [2024-11-28 16:24:23.221907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:31.785 "name": "raid_bdev1",
00:11:31.785 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:31.785 "strip_size_kb": 0,
00:11:31.785 "state": "online",
00:11:31.785 "raid_level": "raid1",
00:11:31.785 "superblock": false,
00:11:31.785 "num_base_bdevs": 2,
00:11:31.785 "num_base_bdevs_discovered": 2,
00:11:31.785 "num_base_bdevs_operational": 2,
00:11:31.785 "process": {
00:11:31.785 "type": "rebuild",
00:11:31.785 "target": "spare",
00:11:31.785 "progress": {
00:11:31.785 "blocks": 47104,
00:11:31.785 "percent": 71
00:11:31.785 }
00:11:31.785 },
00:11:31.785 "base_bdevs_list": [
00:11:31.785 {
00:11:31.785 "name": "spare",
00:11:31.785 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:31.785 "is_configured": true,
00:11:31.785 "data_offset": 0,
00:11:31.785 "data_size": 65536
00:11:31.785 },
00:11:31.785 {
00:11:31.785 "name": "BaseBdev2",
00:11:31.785 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:31.785 "is_configured": true,
00:11:31.785 "data_offset": 0,
00:11:31.785 "data_size": 65536
00:11:31.785 }
00:11:31.785 ]
00:11:31.785 }'
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:31.785 16:24:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:32.044 102.33 IOPS, 307.00 MiB/s [2024-11-28T16:24:23.815Z] [2024-11-28 16:24:23.664604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:11:32.302 [2024-11-28 16:24:23.991564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:11:32.561 [2024-11-28 16:24:24.193904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:32.821 "name": "raid_bdev1",
00:11:32.821 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:32.821 "strip_size_kb": 0,
00:11:32.821 "state": "online",
00:11:32.821 "raid_level": "raid1",
00:11:32.821 "superblock": false,
00:11:32.821 "num_base_bdevs": 2,
00:11:32.821 "num_base_bdevs_discovered": 2,
00:11:32.821 "num_base_bdevs_operational": 2,
00:11:32.821 "process": {
00:11:32.821 "type": "rebuild",
00:11:32.821 "target": "spare",
00:11:32.821 "progress": {
00:11:32.821 "blocks": 63488,
00:11:32.821 "percent": 96
00:11:32.821 }
00:11:32.821 },
00:11:32.821 "base_bdevs_list": [
00:11:32.821 {
00:11:32.821 "name": "spare",
00:11:32.821 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:32.821 "is_configured": true,
00:11:32.821 "data_offset": 0,
00:11:32.821 "data_size": 65536
00:11:32.821 },
00:11:32.821 {
00:11:32.821 "name": "BaseBdev2",
00:11:32.821 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:32.821 "is_configured": true,
00:11:32.821 "data_offset": 0,
00:11:32.821 "data_size": 65536
00:11:32.821 }
00:11:32.821 ]
00:11:32.821 }'
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:32.821 [2024-11-28 16:24:24.528725] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:32.821 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:33.081 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:33.081 16:24:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:33.081 92.43 IOPS, 277.29 MiB/s [2024-11-28T16:24:24.852Z] [2024-11-28 16:24:24.628509] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:11:33.081 [2024-11-28 16:24:24.630515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:34.018 85.50 IOPS, 256.50 MiB/s [2024-11-28T16:24:25.789Z] 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.018 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:34.018 "name": "raid_bdev1",
00:11:34.018 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:34.018 "strip_size_kb": 0,
00:11:34.018 "state": "online",
00:11:34.019 "raid_level": "raid1",
00:11:34.019 "superblock": false,
00:11:34.019 "num_base_bdevs": 2,
00:11:34.019 "num_base_bdevs_discovered": 2,
00:11:34.019 "num_base_bdevs_operational": 2,
00:11:34.019 "base_bdevs_list": [
00:11:34.019 {
00:11:34.019 "name": "spare",
00:11:34.019 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:34.019 "is_configured": true,
00:11:34.019 "data_offset": 0,
00:11:34.019 "data_size": 65536
00:11:34.019 },
00:11:34.019 {
00:11:34.019 "name": "BaseBdev2",
00:11:34.019 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:34.019 "is_configured": true,
00:11:34.019 "data_offset": 0,
00:11:34.019 "data_size": 65536
00:11:34.019 }
00:11:34.019 ]
00:11:34.019 }'
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:34.019 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:34.278 "name": "raid_bdev1",
00:11:34.278 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:34.278 "strip_size_kb": 0,
00:11:34.278 "state": "online",
00:11:34.278 "raid_level": "raid1",
00:11:34.278 "superblock": false,
00:11:34.278 "num_base_bdevs": 2,
00:11:34.278 "num_base_bdevs_discovered": 2,
00:11:34.278 "num_base_bdevs_operational": 2,
00:11:34.278 "base_bdevs_list": [
00:11:34.278 {
00:11:34.278 "name": "spare",
00:11:34.278 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:34.278 "is_configured": true,
00:11:34.278 "data_offset": 0,
00:11:34.278 "data_size": 65536
00:11:34.278 },
00:11:34.278 {
00:11:34.278 "name": "BaseBdev2",
00:11:34.278 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:34.278 "is_configured": true,
00:11:34.278 "data_offset": 0,
00:11:34.278 "data_size": 65536
00:11:34.278 }
00:11:34.278 ]
00:11:34.278 }'
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:34.278 "name": "raid_bdev1",
00:11:34.278 "uuid": "f3a2da94-2282-4dc9-bc9b-d34db852fd4a",
00:11:34.278 "strip_size_kb": 0,
00:11:34.278 "state": "online",
00:11:34.278 "raid_level": "raid1",
00:11:34.278 "superblock": false,
00:11:34.278 "num_base_bdevs": 2,
00:11:34.278 "num_base_bdevs_discovered": 2,
00:11:34.278 "num_base_bdevs_operational": 2,
00:11:34.278 "base_bdevs_list": [
00:11:34.278 {
00:11:34.278 "name": "spare",
00:11:34.278 "uuid": "84aeee63-024c-5a2b-8b32-7596382161aa",
00:11:34.278 "is_configured": true,
00:11:34.278 "data_offset": 0,
00:11:34.278 "data_size": 65536
00:11:34.278 },
00:11:34.278 {
00:11:34.278 "name": "BaseBdev2",
00:11:34.278 "uuid": "ee46a42f-701b-5f95-a747-b0e7917795e7",
00:11:34.278 "is_configured": true,
00:11:34.278 "data_offset": 0,
00:11:34.278 "data_size": 65536
00:11:34.278 }
00:11:34.278 ]
00:11:34.278 }'
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:34.278 16:24:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:34.537 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:34.537 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.537 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:34.537 [2024-11-28 16:24:26.302651] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 [2024-11-28 16:24:26.302766] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:34.797
00:11:34.797 Latency(us) [2024-11-28T16:24:26.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:34.797 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:11:34.797 raid_bdev1 : 8.73 81.57 244.72 0.00 0.00 17083.16 282.61 114015.47
00:11:34.797 [2024-11-28T16:24:26.568Z] ===================================================================================================================
00:11:34.797 [2024-11-28T16:24:26.568Z] Total : 81.57 244.72 0.00 0.00 17083.16 282.61 114015.47
00:11:34.797 [2024-11-28 16:24:26.341865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:34.797 [2024-11-28 16:24:26.341954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:34.797 [2024-11-28 16:24:26.342047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:34.797 [2024-11-28 16:24:26.342110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:11:34.797 {
00:11:34.797 "results": [
00:11:34.797 {
00:11:34.797 "job": "raid_bdev1",
00:11:34.797 "core_mask": "0x1",
00:11:34.797 "workload": "randrw",
00:11:34.797 "percentage": 50,
00:11:34.797 "status": "finished",
00:11:34.797 "queue_depth": 2,
00:11:34.797 "io_size": 3145728,
00:11:34.797 "runtime": 8.728182,
00:11:34.797 "iops": 81.57483425528936,
00:11:34.797 "mibps": 244.72450276586807,
00:11:34.797 "io_failed": 0,
00:11:34.797 "io_timeout": 0,
00:11:34.797 "avg_latency_us": 17083.164751484226,
00:11:34.797 "min_latency_us": 282.6061135371179,
00:11:34.797 "max_latency_us": 114015.46899563319
00:11:34.797 }
00:11:34.797 ],
00:11:34.797 "core_count": 1
00:11:34.797 }
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:34.797 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:11:35.057 /dev/nbd0
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.057 1+0 records in
00:11:35.057 1+0 records out
00:11:35.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297503 s, 13.8 MB/s
00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:35.057 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:35.317 /dev/nbd1 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.317 1+0 records in 00:11:35.317 1+0 records out 00:11:35.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488703 s, 8.4 MB/s 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.317 16:24:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.578 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87131 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87131 ']' 00:11:35.838 16:24:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87131 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87131 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87131' 00:11:35.838 killing process with pid 87131 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87131 00:11:35.838 Received shutdown signal, test time was about 9.825770 seconds 00:11:35.838 00:11:35.838 Latency(us) 00:11:35.838 [2024-11-28T16:24:27.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.838 [2024-11-28T16:24:27.609Z] =================================================================================================================== 00:11:35.838 [2024-11-28T16:24:27.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:35.838 [2024-11-28 16:24:27.434015] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.838 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87131 00:11:35.838 [2024-11-28 16:24:27.460238] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:36.098 00:11:36.098 real 0m11.732s 00:11:36.098 user 0m14.832s 00:11:36.098 sys 0m1.451s 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.098 ************************************ 00:11:36.098 END TEST raid_rebuild_test_io 00:11:36.098 ************************************ 00:11:36.098 16:24:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:36.098 16:24:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:36.098 16:24:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.098 16:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.098 ************************************ 00:11:36.098 START TEST raid_rebuild_test_sb_io 00:11:36.098 ************************************ 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:36.098 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.099 16:24:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87514 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87514 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87514 ']' 
00:11:36.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.099 16:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.099 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:36.099 Zero copy mechanism will not be used. 00:11:36.099 [2024-11-28 16:24:27.857318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:36.099 [2024-11-28 16:24:27.857457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87514 ] 00:11:36.359 [2024-11-28 16:24:28.016010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.359 [2024-11-28 16:24:28.062880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.359 [2024-11-28 16:24:28.105423] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.359 [2024-11-28 16:24:28.105542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.929 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.929 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:36.929 16:24:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.929 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.929 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.929 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.190 BaseBdev1_malloc 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.190 [2024-11-28 16:24:28.707989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:37.190 [2024-11-28 16:24:28.708117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.190 [2024-11-28 16:24:28.708150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:37.190 [2024-11-28 16:24:28.708165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.190 [2024-11-28 16:24:28.710198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.190 [2024-11-28 16:24:28.710247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:37.190 BaseBdev1 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.190 BaseBdev2_malloc 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.190 [2024-11-28 16:24:28.744728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:37.190 [2024-11-28 16:24:28.744844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.190 [2024-11-28 16:24:28.744872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:37.190 [2024-11-28 16:24:28.744882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.190 [2024-11-28 16:24:28.746952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.190 [2024-11-28 16:24:28.746984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:37.190 BaseBdev2 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:37.190 spare_malloc 00:11:37.190 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 spare_delay 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 [2024-11-28 16:24:28.785229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:37.191 [2024-11-28 16:24:28.785282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.191 [2024-11-28 16:24:28.785304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:37.191 [2024-11-28 16:24:28.785312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.191 [2024-11-28 16:24:28.787342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.191 [2024-11-28 16:24:28.787421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:37.191 spare 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 [2024-11-28 16:24:28.797249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.191 [2024-11-28 16:24:28.799106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.191 [2024-11-28 16:24:28.799254] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:37.191 [2024-11-28 16:24:28.799272] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.191 [2024-11-28 16:24:28.799508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:37.191 [2024-11-28 16:24:28.799627] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:37.191 [2024-11-28 16:24:28.799645] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:37.191 [2024-11-28 16:24:28.799770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.191 16:24:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.191 "name": "raid_bdev1", 00:11:37.191 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:37.191 "strip_size_kb": 0, 00:11:37.191 "state": "online", 00:11:37.191 "raid_level": "raid1", 00:11:37.191 "superblock": true, 00:11:37.191 "num_base_bdevs": 2, 00:11:37.191 "num_base_bdevs_discovered": 2, 00:11:37.191 "num_base_bdevs_operational": 2, 00:11:37.191 "base_bdevs_list": [ 00:11:37.191 { 00:11:37.191 "name": "BaseBdev1", 00:11:37.191 "uuid": "9bd955dd-5cb8-5c18-a83e-5083e29399e2", 00:11:37.191 "is_configured": true, 00:11:37.191 "data_offset": 2048, 00:11:37.191 "data_size": 63488 00:11:37.191 }, 00:11:37.191 { 00:11:37.191 "name": "BaseBdev2", 00:11:37.191 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:37.191 "is_configured": true, 00:11:37.191 "data_offset": 2048, 
00:11:37.191 "data_size": 63488
00:11:37.191 }
00:11:37.191 ]
00:11:37.191 }'
00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.191 16:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.451 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:37.451 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:11:37.451 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.451 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.451 [2024-11-28 16:24:29.204818] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:37.711 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.711 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:11:37.711 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:11:37.711 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.711 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.712 [2024-11-28 16:24:29.276383] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.712 "name": "raid_bdev1",
00:11:37.712 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:37.712 "strip_size_kb": 0,
00:11:37.712 "state": "online",
00:11:37.712 "raid_level": "raid1",
00:11:37.712 "superblock": true,
00:11:37.712 "num_base_bdevs": 2,
00:11:37.712 "num_base_bdevs_discovered": 1,
00:11:37.712 "num_base_bdevs_operational": 1,
00:11:37.712 "base_bdevs_list": [
00:11:37.712 {
00:11:37.712 "name": null,
00:11:37.712 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.712 "is_configured": false,
00:11:37.712 "data_offset": 0,
00:11:37.712 "data_size": 63488
00:11:37.712 },
00:11:37.712 {
00:11:37.712 "name": "BaseBdev2",
00:11:37.712 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:37.712 "is_configured": true,
00:11:37.712 "data_offset": 2048,
00:11:37.712 "data_size": 63488
00:11:37.712 }
00:11:37.712 ]
00:11:37.712 }'
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.712 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.712 [2024-11-28 16:24:29.374203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:37.712 I/O size of 3145728 is greater than zero copy threshold (65536).
00:11:37.712 Zero copy mechanism will not be used.
00:11:37.712 Running I/O for 60 seconds...
00:11:37.972 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:37.972 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.972 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.972 [2024-11-28 16:24:29.729650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:38.231 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.231 16:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:11:38.231 [2024-11-28 16:24:29.780196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:11:38.231 [2024-11-28 16:24:29.782030] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:38.231 [2024-11-28 16:24:29.894453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:11:38.231 [2024-11-28 16:24:29.894937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:11:38.490 [2024-11-28 16:24:30.023368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:11:38.490 [2024-11-28 16:24:30.023671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:11:38.750 158.00 IOPS, 474.00 MiB/s [2024-11-28T16:24:30.521Z] [2024-11-28 16:24:30.379362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:11:38.750 [2024-11-28 16:24:30.502972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:39.011 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:39.271 [2024-11-28 16:24:30.816338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:11:39.271 [2024-11-28 16:24:30.816804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:39.271 "name": "raid_bdev1",
00:11:39.271 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:39.271 "strip_size_kb": 0,
00:11:39.271 "state": "online",
00:11:39.271 "raid_level": "raid1",
00:11:39.271 "superblock": true,
00:11:39.271 "num_base_bdevs": 2,
00:11:39.271 "num_base_bdevs_discovered": 2,
00:11:39.271 "num_base_bdevs_operational": 2,
00:11:39.271 "process": {
00:11:39.271 "type": "rebuild",
00:11:39.271 "target": "spare",
00:11:39.271 "progress": {
00:11:39.271 "blocks": 12288,
00:11:39.271 "percent": 19
00:11:39.271 }
00:11:39.271 },
00:11:39.271 "base_bdevs_list": [
00:11:39.271 {
00:11:39.271 "name": "spare",
00:11:39.271 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:39.271 "is_configured": true,
00:11:39.271 "data_offset": 2048,
00:11:39.271 "data_size": 63488
00:11:39.271 },
00:11:39.271 {
00:11:39.271 "name": "BaseBdev2",
00:11:39.271 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:39.271 "is_configured": true,
00:11:39.271 "data_offset": 2048,
00:11:39.271 "data_size": 63488
00:11:39.271 }
00:11:39.271 ]
00:11:39.271 }'
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:39.271 16:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:39.271 [2024-11-28 16:24:30.928062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:39.271 [2024-11-28 16:24:31.024391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:11:39.271 [2024-11-28 16:24:31.024757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:11:39.531 [2024-11-28 16:24:31.126369] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:39.531 [2024-11-28 16:24:31.128951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:39.531 [2024-11-28 16:24:31.129044] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:39.531 [2024-11-28 16:24:31.129072] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:39.531 [2024-11-28 16:24:31.145877] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:11:39.531 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:39.531 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:39.531 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:39.531 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:39.531 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:39.532 "name": "raid_bdev1",
00:11:39.532 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:39.532 "strip_size_kb": 0,
00:11:39.532 "state": "online",
00:11:39.532 "raid_level": "raid1",
00:11:39.532 "superblock": true,
00:11:39.532 "num_base_bdevs": 2,
00:11:39.532 "num_base_bdevs_discovered": 1,
00:11:39.532 "num_base_bdevs_operational": 1,
00:11:39.532 "base_bdevs_list": [
00:11:39.532 {
00:11:39.532 "name": null,
00:11:39.532 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:39.532 "is_configured": false,
00:11:39.532 "data_offset": 0,
00:11:39.532 "data_size": 63488
00:11:39.532 },
00:11:39.532 {
00:11:39.532 "name": "BaseBdev2",
00:11:39.532 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:39.532 "is_configured": true,
00:11:39.532 "data_offset": 2048,
00:11:39.532 "data_size": 63488
00:11:39.532 }
00:11:39.532 ]
00:11:39.532 }'
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:39.532 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:40.051 140.50 IOPS, 421.50 MiB/s [2024-11-28T16:24:31.822Z] 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.051 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:40.051 "name": "raid_bdev1",
00:11:40.052 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:40.052 "strip_size_kb": 0,
00:11:40.052 "state": "online",
00:11:40.052 "raid_level": "raid1",
00:11:40.052 "superblock": true,
00:11:40.052 "num_base_bdevs": 2,
00:11:40.052 "num_base_bdevs_discovered": 1,
00:11:40.052 "num_base_bdevs_operational": 1,
00:11:40.052 "base_bdevs_list": [
00:11:40.052 {
00:11:40.052 "name": null,
00:11:40.052 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.052 "is_configured": false,
00:11:40.052 "data_offset": 0,
00:11:40.052 "data_size": 63488
00:11:40.052 },
00:11:40.052 {
00:11:40.052 "name": "BaseBdev2",
00:11:40.052 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:40.052 "is_configured": true,
00:11:40.052 "data_offset": 2048,
00:11:40.052 "data_size": 63488
00:11:40.052 }
00:11:40.052 ]
00:11:40.052 }'
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:40.052 [2024-11-28 16:24:31.746529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.052 16:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:11:40.052 [2024-11-28 16:24:31.781893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:40.052 [2024-11-28 16:24:31.783641] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:40.312 [2024-11-28 16:24:31.884447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:11:40.312 [2024-11-28 16:24:31.884779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:11:40.312 [2024-11-28 16:24:32.004235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:11:40.312 [2024-11-28 16:24:32.004456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:11:40.572 [2024-11-28 16:24:32.330209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:11:40.831 157.00 IOPS, 471.00 MiB/s [2024-11-28T16:24:32.602Z] [2024-11-28 16:24:32.448640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:11:40.831 [2024-11-28 16:24:32.448909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.092 [2024-11-28 16:24:32.797577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:41.092 "name": "raid_bdev1",
00:11:41.092 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:41.092 "strip_size_kb": 0,
00:11:41.092 "state": "online",
00:11:41.092 "raid_level": "raid1",
00:11:41.092 "superblock": true,
00:11:41.092 "num_base_bdevs": 2,
00:11:41.092 "num_base_bdevs_discovered": 2,
00:11:41.092 "num_base_bdevs_operational": 2,
00:11:41.092 "process": {
00:11:41.092 "type": "rebuild",
00:11:41.092 "target": "spare",
00:11:41.092 "progress": {
00:11:41.092 "blocks": 12288,
00:11:41.092 "percent": 19
00:11:41.092 }
00:11:41.092 },
00:11:41.092 "base_bdevs_list": [
00:11:41.092 {
00:11:41.092 "name": "spare",
00:11:41.092 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:41.092 "is_configured": true,
00:11:41.092 "data_offset": 2048,
00:11:41.092 "data_size": 63488
00:11:41.092 },
00:11:41.092 {
00:11:41.092 "name": "BaseBdev2",
00:11:41.092 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:41.092 "is_configured": true,
00:11:41.092 "data_offset": 2048,
00:11:41.092 "data_size": 63488
00:11:41.092 }
00:11:41.092 ]
00:11:41.092 }'
00:11:41.092 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:11:41.352 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=331
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.352 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:41.352 "name": "raid_bdev1",
00:11:41.352 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:41.352 "strip_size_kb": 0,
00:11:41.352 "state": "online",
00:11:41.353 "raid_level": "raid1",
00:11:41.353 "superblock": true,
00:11:41.353 "num_base_bdevs": 2,
00:11:41.353 "num_base_bdevs_discovered": 2,
00:11:41.353 "num_base_bdevs_operational": 2,
00:11:41.353 "process": {
00:11:41.353 "type": "rebuild",
00:11:41.353 "target": "spare",
00:11:41.353 "progress": {
00:11:41.353 "blocks": 14336,
00:11:41.353 "percent": 22
00:11:41.353 }
00:11:41.353 },
00:11:41.353 "base_bdevs_list": [
00:11:41.353 {
00:11:41.353 "name": "spare",
00:11:41.353 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:41.353 "is_configured": true,
00:11:41.353 "data_offset": 2048,
00:11:41.353 "data_size": 63488
00:11:41.353 },
00:11:41.353 {
00:11:41.353 "name": "BaseBdev2",
00:11:41.353 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:41.353 "is_configured": true,
00:11:41.353 "data_offset": 2048,
00:11:41.353 "data_size": 63488
00:11:41.353 }
00:11:41.353 ]
00:11:41.353 }'
00:11:41.353 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:41.353 16:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:41.353 16:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:41.353 16:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:41.353 16:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:41.613 [2024-11-28 16:24:33.027946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:11:41.613 [2024-11-28 16:24:33.350053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:11:41.873 136.25 IOPS, 408.75 MiB/s [2024-11-28T16:24:33.644Z] [2024-11-28 16:24:33.569777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:42.445 "name": "raid_bdev1",
00:11:42.445 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:42.445 "strip_size_kb": 0,
00:11:42.445 "state": "online",
00:11:42.445 "raid_level": "raid1",
00:11:42.445 "superblock": true,
00:11:42.445 "num_base_bdevs": 2,
00:11:42.445 "num_base_bdevs_discovered": 2,
00:11:42.445 "num_base_bdevs_operational": 2,
00:11:42.445 "process": {
00:11:42.445 "type": "rebuild",
00:11:42.445 "target": "spare",
00:11:42.445 "progress": {
00:11:42.445 "blocks": 30720,
00:11:42.445 "percent": 48
00:11:42.445 }
00:11:42.445 },
00:11:42.445 "base_bdevs_list": [
00:11:42.445 {
00:11:42.445 "name": "spare",
00:11:42.445 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:42.445 "is_configured": true,
00:11:42.445 "data_offset": 2048,
00:11:42.445 "data_size": 63488
00:11:42.445 },
00:11:42.445 {
00:11:42.445 "name": "BaseBdev2",
00:11:42.445 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:42.445 "is_configured": true,
00:11:42.445 "data_offset": 2048,
00:11:42.445 "data_size": 63488
00:11:42.445 }
00:11:42.445 ]
00:11:42.445 }'
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:42.445 [2024-11-28 16:24:34.087661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:42.445 16:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:42.445 [2024-11-28 16:24:34.189745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:11:42.445 [2024-11-28 16:24:34.190052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:11:43.340 120.20 IOPS, 360.60 MiB/s [2024-11-28T16:24:35.111Z] [2024-11-28 16:24:34.864127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:11:43.340 [2024-11-28 16:24:34.970656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.599 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:43.599 "name": "raid_bdev1",
00:11:43.600 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:43.600 "strip_size_kb": 0,
00:11:43.600 "state": "online",
00:11:43.600 "raid_level": "raid1",
00:11:43.600 "superblock": true,
00:11:43.600 "num_base_bdevs": 2,
00:11:43.600 "num_base_bdevs_discovered": 2,
00:11:43.600 "num_base_bdevs_operational": 2,
00:11:43.600 "process": {
00:11:43.600 "type": "rebuild",
00:11:43.600 "target": "spare",
00:11:43.600 "progress": {
00:11:43.600 "blocks": 47104,
00:11:43.600 "percent": 74
00:11:43.600 }
00:11:43.600 },
00:11:43.600 "base_bdevs_list": [
00:11:43.600 {
00:11:43.600 "name": "spare",
00:11:43.600 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:43.600 "is_configured": true,
00:11:43.600 "data_offset": 2048,
00:11:43.600 "data_size": 63488
00:11:43.600 },
00:11:43.600 {
00:11:43.600 "name": "BaseBdev2",
00:11:43.600 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:43.600 "is_configured": true,
00:11:43.600 "data_offset": 2048,
00:11:43.600 "data_size": 63488
00:11:43.600 }
00:11:43.600 ]
00:11:43.600 }'
00:11:43.600 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:43.600 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:43.600 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:43.600 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:43.600 16:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:44.118 107.33 IOPS, 322.00 MiB/s [2024-11-28T16:24:35.889Z] [2024-11-28 16:24:35.750679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:11:44.377 [2024-11-28 16:24:36.069500] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:11:44.636 [2024-11-28 16:24:36.169358] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:11:44.636 [2024-11-28 16:24:36.171102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:44.636 "name": "raid_bdev1",
00:11:44.636 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:44.636 "strip_size_kb": 0,
00:11:44.636 "state": "online",
00:11:44.636 "raid_level": "raid1",
00:11:44.636 "superblock": true,
00:11:44.636 "num_base_bdevs": 2,
00:11:44.636 "num_base_bdevs_discovered": 2,
00:11:44.636 "num_base_bdevs_operational": 2,
00:11:44.636 "base_bdevs_list": [
00:11:44.636 {
00:11:44.636 "name": "spare",
00:11:44.636 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:44.636 "is_configured": true,
00:11:44.636 "data_offset": 2048,
00:11:44.636 "data_size": 63488
00:11:44.636 },
00:11:44.636 {
00:11:44.636 "name": "BaseBdev2",
00:11:44.636 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:44.636 "is_configured": true,
00:11:44.636 "data_offset": 2048,
00:11:44.636 "data_size": 63488
00:11:44.636 }
00:11:44.636 ]
00:11:44.636 }'
00:11:44.636 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:44.895 96.29 IOPS, 288.86 MiB/s [2024-11-28T16:24:36.666Z] 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:44.895 "name": "raid_bdev1",
00:11:44.895 "uuid": "760181a2-23c0-4eca-abff-4884955847e7",
00:11:44.895 "strip_size_kb": 0,
00:11:44.895 "state": "online",
00:11:44.895 "raid_level": "raid1",
00:11:44.895 "superblock": true,
00:11:44.895 "num_base_bdevs": 2,
00:11:44.895 "num_base_bdevs_discovered": 2,
00:11:44.895 "num_base_bdevs_operational": 2,
00:11:44.895 "base_bdevs_list": [
00:11:44.895 {
00:11:44.895 "name": "spare",
00:11:44.895 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072",
00:11:44.895 "is_configured": true,
00:11:44.895 "data_offset": 2048,
00:11:44.895 "data_size": 63488
00:11:44.895 },
00:11:44.895 {
00:11:44.895 "name": "BaseBdev2",
00:11:44.895 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702",
00:11:44.895 "is_configured": true,
00:11:44.895 "data_offset": 2048,
00:11:44.895 "data_size": 63488
00:11:44.895 }
00:11:44.895 ]
00:11:44.895 }'
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 --
# jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.895 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.895 "name": "raid_bdev1", 00:11:44.895 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:44.896 "strip_size_kb": 0, 00:11:44.896 "state": "online", 00:11:44.896 "raid_level": "raid1", 00:11:44.896 "superblock": true, 00:11:44.896 "num_base_bdevs": 2, 00:11:44.896 "num_base_bdevs_discovered": 2, 00:11:44.896 "num_base_bdevs_operational": 2, 00:11:44.896 "base_bdevs_list": [ 00:11:44.896 { 00:11:44.896 "name": "spare", 00:11:44.896 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072", 00:11:44.896 "is_configured": true, 00:11:44.896 "data_offset": 2048, 00:11:44.896 "data_size": 63488 00:11:44.896 }, 00:11:44.896 { 00:11:44.896 "name": "BaseBdev2", 00:11:44.896 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:44.896 "is_configured": true, 00:11:44.896 "data_offset": 2048, 00:11:44.896 "data_size": 63488 00:11:44.896 } 00:11:44.896 ] 00:11:44.896 }' 00:11:44.896 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.896 16:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.466 [2024-11-28 16:24:37.038076] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.466 [2024-11-28 16:24:37.038109] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.466 00:11:45.466 Latency(us) 00:11:45.466 [2024-11-28T16:24:37.237Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.466 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:45.466 raid_bdev1 : 7.69 90.39 271.18 0.00 0.00 13826.37 277.24 113099.68 00:11:45.466 [2024-11-28T16:24:37.237Z] =================================================================================================================== 00:11:45.466 [2024-11-28T16:24:37.237Z] Total : 90.39 271.18 0.00 0.00 13826.37 277.24 113099.68 00:11:45.466 [2024-11-28 16:24:37.053754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.466 [2024-11-28 16:24:37.053794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.466 [2024-11-28 16:24:37.053907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.466 [2024-11-28 16:24:37.053920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:45.466 { 00:11:45.466 "results": [ 00:11:45.466 { 00:11:45.466 "job": "raid_bdev1", 00:11:45.466 "core_mask": "0x1", 00:11:45.466 "workload": "randrw", 00:11:45.466 "percentage": 50, 00:11:45.466 "status": "finished", 00:11:45.466 "queue_depth": 2, 00:11:45.466 "io_size": 3145728, 00:11:45.466 "runtime": 7.688568, 00:11:45.466 "iops": 90.39394592074883, 00:11:45.466 "mibps": 271.1818377622465, 00:11:45.466 "io_failed": 0, 00:11:45.466 "io_timeout": 0, 00:11:45.466 "avg_latency_us": 13826.367061041123, 00:11:45.466 "min_latency_us": 277.2401746724891, 00:11:45.466 "max_latency_us": 113099.68209606987 00:11:45.466 } 00:11:45.466 ], 00:11:45.466 "core_count": 1 00:11:45.466 } 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.466 16:24:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.466 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:45.727 /dev/nbd0 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:45.727 16:24:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.727 1+0 records in 00:11:45.727 1+0 records out 00:11:45.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215772 s, 19.0 MB/s 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.727 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:45.987 /dev/nbd1 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.988 1+0 records in 00:11:45.988 1+0 records out 00:11:45.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250152 s, 16.4 MB/s 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:45.988 
16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.988 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.248 
16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.248 16:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.508 
16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.508 [2024-11-28 16:24:38.086062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:46.508 [2024-11-28 16:24:38.086119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.508 [2024-11-28 16:24:38.086142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:46.508 [2024-11-28 16:24:38.086151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.508 [2024-11-28 16:24:38.088297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.508 [2024-11-28 16:24:38.088336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:46.508 [2024-11-28 16:24:38.088438] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:46.508 [2024-11-28 16:24:38.088480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:46.508 [2024-11-28 16:24:38.088601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.508 spare 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.508 [2024-11-28 16:24:38.188503] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:46.508 [2024-11-28 16:24:38.188541] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.508 [2024-11-28 16:24:38.188799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002af30 00:11:46.508 [2024-11-28 16:24:38.188967] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:46.508 [2024-11-28 16:24:38.188990] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:46.508 [2024-11-28 16:24:38.189131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.508 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.508 "name": "raid_bdev1", 00:11:46.508 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:46.508 "strip_size_kb": 0, 00:11:46.508 "state": "online", 00:11:46.508 "raid_level": "raid1", 00:11:46.508 "superblock": true, 00:11:46.508 "num_base_bdevs": 2, 00:11:46.508 "num_base_bdevs_discovered": 2, 00:11:46.508 "num_base_bdevs_operational": 2, 00:11:46.508 "base_bdevs_list": [ 00:11:46.509 { 00:11:46.509 "name": "spare", 00:11:46.509 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072", 00:11:46.509 "is_configured": true, 00:11:46.509 "data_offset": 2048, 00:11:46.509 "data_size": 63488 00:11:46.509 }, 00:11:46.509 { 00:11:46.509 "name": "BaseBdev2", 00:11:46.509 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:46.509 "is_configured": true, 00:11:46.509 "data_offset": 2048, 00:11:46.509 "data_size": 63488 00:11:46.509 } 00:11:46.509 ] 00:11:46.509 }' 00:11:46.509 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.509 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.080 "name": "raid_bdev1", 00:11:47.080 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:47.080 "strip_size_kb": 0, 00:11:47.080 "state": "online", 00:11:47.080 "raid_level": "raid1", 00:11:47.080 "superblock": true, 00:11:47.080 "num_base_bdevs": 2, 00:11:47.080 "num_base_bdevs_discovered": 2, 00:11:47.080 "num_base_bdevs_operational": 2, 00:11:47.080 "base_bdevs_list": [ 00:11:47.080 { 00:11:47.080 "name": "spare", 00:11:47.080 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072", 00:11:47.080 "is_configured": true, 00:11:47.080 "data_offset": 2048, 00:11:47.080 "data_size": 63488 00:11:47.080 }, 00:11:47.080 { 00:11:47.080 "name": "BaseBdev2", 00:11:47.080 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:47.080 "is_configured": true, 00:11:47.080 "data_offset": 2048, 00:11:47.080 "data_size": 63488 00:11:47.080 } 00:11:47.080 ] 00:11:47.080 }' 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.080 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 [2024-11-28 16:24:38.824934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.341 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.341 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.341 "name": "raid_bdev1", 00:11:47.341 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:47.341 "strip_size_kb": 0, 00:11:47.341 "state": "online", 00:11:47.341 "raid_level": "raid1", 00:11:47.341 "superblock": true, 00:11:47.341 "num_base_bdevs": 2, 00:11:47.341 "num_base_bdevs_discovered": 1, 00:11:47.341 "num_base_bdevs_operational": 1, 00:11:47.341 "base_bdevs_list": [ 00:11:47.341 { 00:11:47.341 "name": null, 00:11:47.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.341 "is_configured": false, 00:11:47.341 "data_offset": 0, 00:11:47.341 "data_size": 63488 00:11:47.341 }, 00:11:47.341 { 00:11:47.341 "name": "BaseBdev2", 00:11:47.341 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:47.341 "is_configured": true, 00:11:47.341 "data_offset": 2048, 00:11:47.341 "data_size": 63488 00:11:47.341 } 00:11:47.341 ] 00:11:47.341 }' 00:11:47.341 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:47.341 16:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.601 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:47.601 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.601 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.601 [2024-11-28 16:24:39.292193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:47.601 [2024-11-28 16:24:39.292382] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:47.601 [2024-11-28 16:24:39.292411] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:47.601 [2024-11-28 16:24:39.292451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:47.601 [2024-11-28 16:24:39.296858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:11:47.601 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.601 16:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:47.601 [2024-11-28 16:24:39.298663] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.540 16:24:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.540 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.800 "name": "raid_bdev1", 00:11:48.800 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:48.800 "strip_size_kb": 0, 00:11:48.800 "state": "online", 00:11:48.800 "raid_level": "raid1", 00:11:48.800 "superblock": true, 00:11:48.800 "num_base_bdevs": 2, 00:11:48.800 "num_base_bdevs_discovered": 2, 00:11:48.800 "num_base_bdevs_operational": 2, 00:11:48.800 "process": { 00:11:48.800 "type": "rebuild", 00:11:48.800 "target": "spare", 00:11:48.800 "progress": { 00:11:48.800 "blocks": 20480, 00:11:48.800 "percent": 32 00:11:48.800 } 00:11:48.800 }, 00:11:48.800 "base_bdevs_list": [ 00:11:48.800 { 00:11:48.800 "name": "spare", 00:11:48.800 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072", 00:11:48.800 "is_configured": true, 00:11:48.800 "data_offset": 2048, 00:11:48.800 "data_size": 63488 00:11:48.800 }, 00:11:48.800 { 00:11:48.800 "name": "BaseBdev2", 00:11:48.800 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:48.800 "is_configured": true, 00:11:48.800 "data_offset": 2048, 00:11:48.800 "data_size": 63488 00:11:48.800 } 00:11:48.800 ] 00:11:48.800 }' 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.800 16:24:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.800 [2024-11-28 16:24:40.467644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.800 [2024-11-28 16:24:40.503500] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:48.800 [2024-11-28 16:24:40.503576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.800 [2024-11-28 16:24:40.503592] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.800 [2024-11-28 16:24:40.503603] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.800 16:24:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.800 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.801 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.801 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.801 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.801 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.801 "name": "raid_bdev1", 00:11:48.801 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:48.801 "strip_size_kb": 0, 00:11:48.801 "state": "online", 00:11:48.801 "raid_level": "raid1", 00:11:48.801 "superblock": true, 00:11:48.801 "num_base_bdevs": 2, 00:11:48.801 "num_base_bdevs_discovered": 1, 00:11:48.801 "num_base_bdevs_operational": 1, 00:11:48.801 "base_bdevs_list": [ 00:11:48.801 { 00:11:48.801 "name": null, 00:11:48.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.801 "is_configured": false, 00:11:48.801 "data_offset": 0, 00:11:48.801 "data_size": 63488 00:11:48.801 }, 00:11:48.801 { 00:11:48.801 "name": "BaseBdev2", 00:11:48.801 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:48.801 "is_configured": true, 00:11:48.801 "data_offset": 2048, 00:11:48.801 
"data_size": 63488 00:11:48.801 } 00:11:48.801 ] 00:11:48.801 }' 00:11:48.801 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.801 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.369 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:49.369 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.369 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.369 [2024-11-28 16:24:40.959881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:49.369 [2024-11-28 16:24:40.959972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.369 [2024-11-28 16:24:40.960000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:49.369 [2024-11-28 16:24:40.960014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.369 [2024-11-28 16:24:40.960505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.369 [2024-11-28 16:24:40.960539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:49.369 [2024-11-28 16:24:40.960640] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:49.369 [2024-11-28 16:24:40.960665] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:49.369 [2024-11-28 16:24:40.960678] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:49.369 [2024-11-28 16:24:40.960729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.369 [2024-11-28 16:24:40.965518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:11:49.369 spare 00:11:49.369 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.369 16:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:49.369 [2024-11-28 16:24:40.967761] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.308 16:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.308 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.308 "name": "raid_bdev1", 00:11:50.308 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:50.308 "strip_size_kb": 0, 00:11:50.308 
"state": "online", 00:11:50.308 "raid_level": "raid1", 00:11:50.308 "superblock": true, 00:11:50.308 "num_base_bdevs": 2, 00:11:50.308 "num_base_bdevs_discovered": 2, 00:11:50.308 "num_base_bdevs_operational": 2, 00:11:50.308 "process": { 00:11:50.308 "type": "rebuild", 00:11:50.308 "target": "spare", 00:11:50.308 "progress": { 00:11:50.308 "blocks": 20480, 00:11:50.308 "percent": 32 00:11:50.308 } 00:11:50.308 }, 00:11:50.308 "base_bdevs_list": [ 00:11:50.308 { 00:11:50.308 "name": "spare", 00:11:50.308 "uuid": "8c2b006f-14b3-549c-93fb-749cdb25a072", 00:11:50.308 "is_configured": true, 00:11:50.308 "data_offset": 2048, 00:11:50.308 "data_size": 63488 00:11:50.308 }, 00:11:50.308 { 00:11:50.308 "name": "BaseBdev2", 00:11:50.308 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:50.308 "is_configured": true, 00:11:50.308 "data_offset": 2048, 00:11:50.308 "data_size": 63488 00:11:50.308 } 00:11:50.308 ] 00:11:50.308 }' 00:11:50.308 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.568 [2024-11-28 16:24:42.143970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.568 [2024-11-28 16:24:42.172355] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:11:50.568 [2024-11-28 16:24:42.172425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.568 [2024-11-28 16:24:42.172446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.568 [2024-11-28 16:24:42.172454] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.568 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.569 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.569 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.569 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.569 "name": "raid_bdev1", 00:11:50.569 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:50.569 "strip_size_kb": 0, 00:11:50.569 "state": "online", 00:11:50.569 "raid_level": "raid1", 00:11:50.569 "superblock": true, 00:11:50.569 "num_base_bdevs": 2, 00:11:50.569 "num_base_bdevs_discovered": 1, 00:11:50.569 "num_base_bdevs_operational": 1, 00:11:50.569 "base_bdevs_list": [ 00:11:50.569 { 00:11:50.569 "name": null, 00:11:50.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.569 "is_configured": false, 00:11:50.569 "data_offset": 0, 00:11:50.569 "data_size": 63488 00:11:50.569 }, 00:11:50.569 { 00:11:50.569 "name": "BaseBdev2", 00:11:50.569 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:50.569 "is_configured": true, 00:11:50.569 "data_offset": 2048, 00:11:50.569 "data_size": 63488 00:11:50.569 } 00:11:50.569 ] 00:11:50.569 }' 00:11:50.569 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.569 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.137 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.137 "name": "raid_bdev1", 00:11:51.137 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:51.137 "strip_size_kb": 0, 00:11:51.137 "state": "online", 00:11:51.137 "raid_level": "raid1", 00:11:51.137 "superblock": true, 00:11:51.137 "num_base_bdevs": 2, 00:11:51.137 "num_base_bdevs_discovered": 1, 00:11:51.137 "num_base_bdevs_operational": 1, 00:11:51.137 "base_bdevs_list": [ 00:11:51.137 { 00:11:51.137 "name": null, 00:11:51.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.137 "is_configured": false, 00:11:51.137 "data_offset": 0, 00:11:51.137 "data_size": 63488 00:11:51.137 }, 00:11:51.137 { 00:11:51.137 "name": "BaseBdev2", 00:11:51.137 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:51.137 "is_configured": true, 00:11:51.138 "data_offset": 2048, 00:11:51.138 "data_size": 63488 00:11:51.138 } 00:11:51.138 ] 00:11:51.138 }' 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.138 [2024-11-28 16:24:42.796192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:51.138 [2024-11-28 16:24:42.796252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.138 [2024-11-28 16:24:42.796275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:51.138 [2024-11-28 16:24:42.796284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.138 [2024-11-28 16:24:42.796673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.138 [2024-11-28 16:24:42.796699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.138 [2024-11-28 16:24:42.796773] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:51.138 [2024-11-28 16:24:42.796789] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:51.138 [2024-11-28 16:24:42.796799] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:51.138 [2024-11-28 16:24:42.796810] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:51.138 BaseBdev1 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.138 16:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.078 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.339 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.339 "name": "raid_bdev1", 00:11:52.339 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:52.339 "strip_size_kb": 0, 00:11:52.339 "state": "online", 00:11:52.339 "raid_level": "raid1", 00:11:52.339 "superblock": true, 00:11:52.339 "num_base_bdevs": 2, 00:11:52.339 "num_base_bdevs_discovered": 1, 00:11:52.339 "num_base_bdevs_operational": 1, 00:11:52.339 "base_bdevs_list": [ 00:11:52.339 { 00:11:52.339 "name": null, 00:11:52.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.339 "is_configured": false, 00:11:52.339 "data_offset": 0, 00:11:52.339 "data_size": 63488 00:11:52.339 }, 00:11:52.339 { 00:11:52.339 "name": "BaseBdev2", 00:11:52.339 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:52.339 "is_configured": true, 00:11:52.339 "data_offset": 2048, 00:11:52.339 "data_size": 63488 00:11:52.339 } 00:11:52.339 ] 00:11:52.339 }' 00:11:52.339 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.339 16:24:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.599 "name": "raid_bdev1", 00:11:52.599 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:52.599 "strip_size_kb": 0, 00:11:52.599 "state": "online", 00:11:52.599 "raid_level": "raid1", 00:11:52.599 "superblock": true, 00:11:52.599 "num_base_bdevs": 2, 00:11:52.599 "num_base_bdevs_discovered": 1, 00:11:52.599 "num_base_bdevs_operational": 1, 00:11:52.599 "base_bdevs_list": [ 00:11:52.599 { 00:11:52.599 "name": null, 00:11:52.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.599 "is_configured": false, 00:11:52.599 "data_offset": 0, 00:11:52.599 "data_size": 63488 00:11:52.599 }, 00:11:52.599 { 00:11:52.599 "name": "BaseBdev2", 00:11:52.599 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:52.599 "is_configured": true, 00:11:52.599 "data_offset": 2048, 00:11:52.599 "data_size": 63488 00:11:52.599 } 00:11:52.599 ] 00:11:52.599 }' 00:11:52.599 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.859 [2024-11-28 16:24:44.442536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.859 [2024-11-28 16:24:44.442703] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:52.859 [2024-11-28 16:24:44.442727] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:52.859 request: 00:11:52.859 { 00:11:52.859 "base_bdev": "BaseBdev1", 00:11:52.859 "raid_bdev": "raid_bdev1", 00:11:52.859 "method": "bdev_raid_add_base_bdev", 00:11:52.859 "req_id": 1 00:11:52.859 } 00:11:52.859 Got JSON-RPC error response 00:11:52.859 response: 00:11:52.859 { 00:11:52.859 "code": -22, 00:11:52.859 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:52.859 } 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:52.859 16:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.805 "name": "raid_bdev1", 00:11:53.805 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:53.805 "strip_size_kb": 0, 00:11:53.805 "state": "online", 00:11:53.805 "raid_level": "raid1", 00:11:53.805 "superblock": true, 00:11:53.805 "num_base_bdevs": 2, 00:11:53.805 "num_base_bdevs_discovered": 1, 00:11:53.805 "num_base_bdevs_operational": 1, 00:11:53.805 "base_bdevs_list": [ 00:11:53.805 { 00:11:53.805 "name": null, 00:11:53.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.805 "is_configured": false, 00:11:53.805 "data_offset": 0, 00:11:53.805 "data_size": 63488 00:11:53.805 }, 00:11:53.805 { 00:11:53.805 "name": "BaseBdev2", 00:11:53.805 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:53.805 "is_configured": true, 00:11:53.805 "data_offset": 2048, 00:11:53.805 "data_size": 63488 00:11:53.805 } 00:11:53.805 ] 00:11:53.805 }' 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.805 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.373 16:24:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.373 "name": "raid_bdev1", 00:11:54.373 "uuid": "760181a2-23c0-4eca-abff-4884955847e7", 00:11:54.373 "strip_size_kb": 0, 00:11:54.373 "state": "online", 00:11:54.373 "raid_level": "raid1", 00:11:54.373 "superblock": true, 00:11:54.373 "num_base_bdevs": 2, 00:11:54.373 "num_base_bdevs_discovered": 1, 00:11:54.373 "num_base_bdevs_operational": 1, 00:11:54.373 "base_bdevs_list": [ 00:11:54.373 { 00:11:54.373 "name": null, 00:11:54.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.373 "is_configured": false, 00:11:54.373 "data_offset": 0, 00:11:54.373 "data_size": 63488 00:11:54.373 }, 00:11:54.373 { 00:11:54.373 "name": "BaseBdev2", 00:11:54.373 "uuid": "9099d3cb-3244-5266-a995-3c532d89b702", 00:11:54.373 "is_configured": true, 00:11:54.373 "data_offset": 2048, 00:11:54.373 "data_size": 63488 00:11:54.373 } 00:11:54.373 ] 00:11:54.373 }' 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.373 16:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.373 16:24:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87514 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87514 ']' 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87514 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87514 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.373 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.373 killing process with pid 87514 00:11:54.374 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87514' 00:11:54.374 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87514 00:11:54.374 Received shutdown signal, test time was about 16.720373 seconds 00:11:54.374 00:11:54.374 Latency(us) 00:11:54.374 [2024-11-28T16:24:46.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.374 [2024-11-28T16:24:46.145Z] =================================================================================================================== 00:11:54.374 [2024-11-28T16:24:46.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:54.374 [2024-11-28 16:24:46.064390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.374 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87514 00:11:54.374 [2024-11-28 16:24:46.064566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.374 [2024-11-28 16:24:46.064634] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.374 [2024-11-28 16:24:46.064652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:54.374 [2024-11-28 16:24:46.093213] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.633 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:54.633 00:11:54.633 real 0m18.585s 00:11:54.633 user 0m24.775s 00:11:54.633 sys 0m2.111s 00:11:54.633 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.633 16:24:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.633 ************************************ 00:11:54.633 END TEST raid_rebuild_test_sb_io 00:11:54.633 ************************************ 00:11:54.892 16:24:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:54.892 16:24:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:54.892 16:24:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:54.892 16:24:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.892 16:24:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.892 ************************************ 00:11:54.892 START TEST raid_rebuild_test 00:11:54.892 ************************************ 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:54.892 16:24:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:54.892 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88191 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88191 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88191 ']' 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.893 16:24:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.893 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:54.893 Zero copy mechanism will not be used. 
00:11:54.893 [2024-11-28 16:24:46.521487] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:54.893 [2024-11-28 16:24:46.521649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88191 ] 00:11:55.152 [2024-11-28 16:24:46.687660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.152 [2024-11-28 16:24:46.735886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.152 [2024-11-28 16:24:46.781562] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.152 [2024-11-28 16:24:46.781605] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.722 BaseBdev1_malloc 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.722 
[2024-11-28 16:24:47.405739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:55.722 [2024-11-28 16:24:47.405821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.722 [2024-11-28 16:24:47.405867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:55.722 [2024-11-28 16:24:47.405889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.722 [2024-11-28 16:24:47.408439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.722 [2024-11-28 16:24:47.408484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.722 BaseBdev1 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.722 BaseBdev2_malloc 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.722 [2024-11-28 16:24:47.445263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:55.722 [2024-11-28 16:24:47.445322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:55.722 [2024-11-28 16:24:47.445345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:55.722 [2024-11-28 16:24:47.445355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.722 [2024-11-28 16:24:47.447793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.722 [2024-11-28 16:24:47.447855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.722 BaseBdev2 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.722 BaseBdev3_malloc 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.722 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 [2024-11-28 16:24:47.474651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:55.723 [2024-11-28 16:24:47.474757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.723 [2024-11-28 16:24:47.474790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:55.723 [2024-11-28 16:24:47.474800] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.723 [2024-11-28 16:24:47.477236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.723 [2024-11-28 16:24:47.477279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.723 BaseBdev3 00:11:55.723 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.723 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:55.723 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:55.723 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 BaseBdev4_malloc 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 [2024-11-28 16:24:47.503958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:55.983 [2024-11-28 16:24:47.504022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.983 [2024-11-28 16:24:47.504054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:55.983 [2024-11-28 16:24:47.504065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.983 [2024-11-28 16:24:47.506461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.983 [2024-11-28 16:24:47.506501] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:55.983 BaseBdev4 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 spare_malloc 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 spare_delay 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 [2024-11-28 16:24:47.545179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:55.983 [2024-11-28 16:24:47.545290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.983 [2024-11-28 16:24:47.545325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:55.983 [2024-11-28 16:24:47.545339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.983 [2024-11-28 
16:24:47.547788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.983 [2024-11-28 16:24:47.547841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:55.983 spare 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 [2024-11-28 16:24:47.561258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.983 [2024-11-28 16:24:47.563399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.983 [2024-11-28 16:24:47.563486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.983 [2024-11-28 16:24:47.563539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.983 [2024-11-28 16:24:47.563639] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:55.983 [2024-11-28 16:24:47.563660] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:55.983 [2024-11-28 16:24:47.563985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:55.983 [2024-11-28 16:24:47.564171] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:55.983 [2024-11-28 16:24:47.564200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:55.983 [2024-11-28 16:24:47.564363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.983 "name": "raid_bdev1", 00:11:55.983 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:11:55.983 "strip_size_kb": 0, 00:11:55.983 "state": "online", 00:11:55.983 "raid_level": 
"raid1", 00:11:55.983 "superblock": false, 00:11:55.983 "num_base_bdevs": 4, 00:11:55.983 "num_base_bdevs_discovered": 4, 00:11:55.983 "num_base_bdevs_operational": 4, 00:11:55.983 "base_bdevs_list": [ 00:11:55.983 { 00:11:55.983 "name": "BaseBdev1", 00:11:55.983 "uuid": "208d92cd-935d-523b-91dd-498ab5bc2d72", 00:11:55.983 "is_configured": true, 00:11:55.983 "data_offset": 0, 00:11:55.983 "data_size": 65536 00:11:55.983 }, 00:11:55.983 { 00:11:55.983 "name": "BaseBdev2", 00:11:55.983 "uuid": "6c3e56c3-8eb9-5dbe-b015-52d692d2c7d1", 00:11:55.983 "is_configured": true, 00:11:55.983 "data_offset": 0, 00:11:55.983 "data_size": 65536 00:11:55.983 }, 00:11:55.983 { 00:11:55.983 "name": "BaseBdev3", 00:11:55.983 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:11:55.983 "is_configured": true, 00:11:55.983 "data_offset": 0, 00:11:55.983 "data_size": 65536 00:11:55.983 }, 00:11:55.983 { 00:11:55.983 "name": "BaseBdev4", 00:11:55.983 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:11:55.983 "is_configured": true, 00:11:55.983 "data_offset": 0, 00:11:55.983 "data_size": 65536 00:11:55.983 } 00:11:55.983 ] 00:11:55.983 }' 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.983 16:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:56.571 [2024-11-28 16:24:48.060916] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.571 16:24:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.571 16:24:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:56.846 [2024-11-28 16:24:48.348103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:56.846 /dev/nbd0 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.846 1+0 records in 00:11:56.846 1+0 records out 00:11:56.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406578 s, 10.1 MB/s 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:56.846 16:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:03.422 65536+0 records in 00:12:03.422 65536+0 records out 00:12:03.422 33554432 bytes (34 MB, 32 MiB) copied, 5.93025 s, 5.7 MB/s 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:03.422 [2024-11-28 16:24:54.540481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:03.422 
16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.422 [2024-11-28 16:24:54.578861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.422 16:24:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.422 "name": "raid_bdev1", 00:12:03.422 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:03.422 "strip_size_kb": 0, 00:12:03.422 "state": "online", 00:12:03.422 "raid_level": "raid1", 00:12:03.422 "superblock": false, 00:12:03.422 "num_base_bdevs": 4, 00:12:03.422 "num_base_bdevs_discovered": 3, 00:12:03.422 "num_base_bdevs_operational": 3, 00:12:03.422 "base_bdevs_list": [ 00:12:03.422 { 00:12:03.422 "name": null, 00:12:03.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.422 "is_configured": false, 00:12:03.422 "data_offset": 0, 00:12:03.422 "data_size": 65536 00:12:03.422 }, 00:12:03.422 { 00:12:03.422 "name": "BaseBdev2", 00:12:03.422 "uuid": "6c3e56c3-8eb9-5dbe-b015-52d692d2c7d1", 00:12:03.422 "is_configured": true, 00:12:03.422 "data_offset": 0, 00:12:03.422 "data_size": 65536 00:12:03.422 }, 00:12:03.422 { 00:12:03.422 "name": "BaseBdev3", 00:12:03.422 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:03.422 "is_configured": true, 00:12:03.422 "data_offset": 0, 00:12:03.422 "data_size": 65536 00:12:03.422 }, 00:12:03.422 { 00:12:03.422 "name": "BaseBdev4", 00:12:03.422 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:03.422 
"is_configured": true, 00:12:03.422 "data_offset": 0, 00:12:03.422 "data_size": 65536 00:12:03.422 } 00:12:03.422 ] 00:12:03.422 }' 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.422 16:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.422 16:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:03.422 16:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.422 16:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.422 [2024-11-28 16:24:55.014119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.422 [2024-11-28 16:24:55.017487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:03.422 [2024-11-28 16:24:55.019294] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.422 16:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.422 16:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.361 "name": "raid_bdev1", 00:12:04.361 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:04.361 "strip_size_kb": 0, 00:12:04.361 "state": "online", 00:12:04.361 "raid_level": "raid1", 00:12:04.361 "superblock": false, 00:12:04.361 "num_base_bdevs": 4, 00:12:04.361 "num_base_bdevs_discovered": 4, 00:12:04.361 "num_base_bdevs_operational": 4, 00:12:04.361 "process": { 00:12:04.361 "type": "rebuild", 00:12:04.361 "target": "spare", 00:12:04.361 "progress": { 00:12:04.361 "blocks": 20480, 00:12:04.361 "percent": 31 00:12:04.361 } 00:12:04.361 }, 00:12:04.361 "base_bdevs_list": [ 00:12:04.361 { 00:12:04.361 "name": "spare", 00:12:04.361 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:04.361 "is_configured": true, 00:12:04.361 "data_offset": 0, 00:12:04.361 "data_size": 65536 00:12:04.361 }, 00:12:04.361 { 00:12:04.361 "name": "BaseBdev2", 00:12:04.361 "uuid": "6c3e56c3-8eb9-5dbe-b015-52d692d2c7d1", 00:12:04.361 "is_configured": true, 00:12:04.361 "data_offset": 0, 00:12:04.361 "data_size": 65536 00:12:04.361 }, 00:12:04.361 { 00:12:04.361 "name": "BaseBdev3", 00:12:04.361 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:04.361 "is_configured": true, 00:12:04.361 "data_offset": 0, 00:12:04.361 "data_size": 65536 00:12:04.361 }, 00:12:04.361 { 00:12:04.361 "name": "BaseBdev4", 00:12:04.361 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:04.361 "is_configured": true, 00:12:04.361 "data_offset": 0, 00:12:04.361 "data_size": 65536 00:12:04.361 } 00:12:04.361 ] 00:12:04.361 }' 00:12:04.361 16:24:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.624 [2024-11-28 16:24:56.169857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.624 [2024-11-28 16:24:56.223879] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:04.624 [2024-11-28 16:24:56.223947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.624 [2024-11-28 16:24:56.223964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.624 [2024-11-28 16:24:56.223971] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.624 "name": "raid_bdev1", 00:12:04.624 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:04.624 "strip_size_kb": 0, 00:12:04.624 "state": "online", 00:12:04.624 "raid_level": "raid1", 00:12:04.624 "superblock": false, 00:12:04.624 "num_base_bdevs": 4, 00:12:04.624 "num_base_bdevs_discovered": 3, 00:12:04.624 "num_base_bdevs_operational": 3, 00:12:04.624 "base_bdevs_list": [ 00:12:04.624 { 00:12:04.624 "name": null, 00:12:04.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.624 "is_configured": false, 00:12:04.624 "data_offset": 0, 00:12:04.624 "data_size": 65536 00:12:04.624 }, 00:12:04.624 { 00:12:04.624 "name": "BaseBdev2", 00:12:04.624 "uuid": "6c3e56c3-8eb9-5dbe-b015-52d692d2c7d1", 00:12:04.624 "is_configured": true, 00:12:04.624 "data_offset": 0, 00:12:04.624 "data_size": 65536 00:12:04.624 }, 00:12:04.624 { 
00:12:04.624 "name": "BaseBdev3", 00:12:04.624 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:04.624 "is_configured": true, 00:12:04.624 "data_offset": 0, 00:12:04.624 "data_size": 65536 00:12:04.624 }, 00:12:04.624 { 00:12:04.624 "name": "BaseBdev4", 00:12:04.624 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:04.624 "is_configured": true, 00:12:04.624 "data_offset": 0, 00:12:04.624 "data_size": 65536 00:12:04.624 } 00:12:04.624 ] 00:12:04.624 }' 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.624 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.883 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.143 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.143 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.143 "name": "raid_bdev1", 00:12:05.143 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:05.143 "strip_size_kb": 0, 00:12:05.143 "state": "online", 
00:12:05.143 "raid_level": "raid1", 00:12:05.143 "superblock": false, 00:12:05.143 "num_base_bdevs": 4, 00:12:05.143 "num_base_bdevs_discovered": 3, 00:12:05.143 "num_base_bdevs_operational": 3, 00:12:05.144 "base_bdevs_list": [ 00:12:05.144 { 00:12:05.144 "name": null, 00:12:05.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.144 "is_configured": false, 00:12:05.144 "data_offset": 0, 00:12:05.144 "data_size": 65536 00:12:05.144 }, 00:12:05.144 { 00:12:05.144 "name": "BaseBdev2", 00:12:05.144 "uuid": "6c3e56c3-8eb9-5dbe-b015-52d692d2c7d1", 00:12:05.144 "is_configured": true, 00:12:05.144 "data_offset": 0, 00:12:05.144 "data_size": 65536 00:12:05.144 }, 00:12:05.144 { 00:12:05.144 "name": "BaseBdev3", 00:12:05.144 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:05.144 "is_configured": true, 00:12:05.144 "data_offset": 0, 00:12:05.144 "data_size": 65536 00:12:05.144 }, 00:12:05.144 { 00:12:05.144 "name": "BaseBdev4", 00:12:05.144 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:05.144 "is_configured": true, 00:12:05.144 "data_offset": 0, 00:12:05.144 "data_size": 65536 00:12:05.144 } 00:12:05.144 ] 00:12:05.144 }' 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.144 [2024-11-28 16:24:56.771072] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.144 [2024-11-28 16:24:56.774287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:05.144 [2024-11-28 16:24:56.776146] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.144 16:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:06.082 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.082 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.082 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.082 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.082 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.082 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.083 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.083 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.083 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.083 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.083 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.083 "name": "raid_bdev1", 00:12:06.083 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:06.083 "strip_size_kb": 0, 00:12:06.083 "state": "online", 00:12:06.083 "raid_level": "raid1", 00:12:06.083 "superblock": false, 00:12:06.083 "num_base_bdevs": 4, 00:12:06.083 
"num_base_bdevs_discovered": 4, 00:12:06.083 "num_base_bdevs_operational": 4, 00:12:06.083 "process": { 00:12:06.083 "type": "rebuild", 00:12:06.083 "target": "spare", 00:12:06.083 "progress": { 00:12:06.083 "blocks": 20480, 00:12:06.083 "percent": 31 00:12:06.083 } 00:12:06.083 }, 00:12:06.083 "base_bdevs_list": [ 00:12:06.083 { 00:12:06.083 "name": "spare", 00:12:06.083 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:06.083 "is_configured": true, 00:12:06.083 "data_offset": 0, 00:12:06.083 "data_size": 65536 00:12:06.083 }, 00:12:06.083 { 00:12:06.083 "name": "BaseBdev2", 00:12:06.083 "uuid": "6c3e56c3-8eb9-5dbe-b015-52d692d2c7d1", 00:12:06.083 "is_configured": true, 00:12:06.083 "data_offset": 0, 00:12:06.083 "data_size": 65536 00:12:06.083 }, 00:12:06.083 { 00:12:06.083 "name": "BaseBdev3", 00:12:06.083 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:06.083 "is_configured": true, 00:12:06.083 "data_offset": 0, 00:12:06.083 "data_size": 65536 00:12:06.083 }, 00:12:06.083 { 00:12:06.083 "name": "BaseBdev4", 00:12:06.083 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:06.083 "is_configured": true, 00:12:06.083 "data_offset": 0, 00:12:06.083 "data_size": 65536 00:12:06.083 } 00:12:06.083 ] 00:12:06.083 }' 00:12:06.083 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.342 [2024-11-28 16:24:57.911861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.342 [2024-11-28 16:24:57.980116] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.342 16:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.342 16:24:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.342 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.342 "name": "raid_bdev1", 00:12:06.342 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:06.342 "strip_size_kb": 0, 00:12:06.342 "state": "online", 00:12:06.342 "raid_level": "raid1", 00:12:06.342 "superblock": false, 00:12:06.342 "num_base_bdevs": 4, 00:12:06.342 "num_base_bdevs_discovered": 3, 00:12:06.342 "num_base_bdevs_operational": 3, 00:12:06.342 "process": { 00:12:06.342 "type": "rebuild", 00:12:06.342 "target": "spare", 00:12:06.342 "progress": { 00:12:06.342 "blocks": 24576, 00:12:06.342 "percent": 37 00:12:06.342 } 00:12:06.342 }, 00:12:06.342 "base_bdevs_list": [ 00:12:06.342 { 00:12:06.342 "name": "spare", 00:12:06.342 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:06.342 "is_configured": true, 00:12:06.342 "data_offset": 0, 00:12:06.342 "data_size": 65536 00:12:06.342 }, 00:12:06.342 { 00:12:06.342 "name": null, 00:12:06.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.342 "is_configured": false, 00:12:06.342 "data_offset": 0, 00:12:06.342 "data_size": 65536 00:12:06.342 }, 00:12:06.342 { 00:12:06.342 "name": "BaseBdev3", 00:12:06.342 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:06.342 "is_configured": true, 00:12:06.342 "data_offset": 0, 00:12:06.342 "data_size": 65536 00:12:06.342 }, 00:12:06.342 { 00:12:06.342 "name": "BaseBdev4", 00:12:06.342 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:06.342 "is_configured": true, 00:12:06.342 "data_offset": 0, 00:12:06.342 "data_size": 65536 00:12:06.342 } 00:12:06.342 ] 00:12:06.342 }' 00:12:06.342 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.342 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.342 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=357 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.602 "name": "raid_bdev1", 00:12:06.602 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:06.602 "strip_size_kb": 0, 00:12:06.602 "state": "online", 00:12:06.602 "raid_level": "raid1", 00:12:06.602 "superblock": false, 00:12:06.602 "num_base_bdevs": 4, 00:12:06.602 "num_base_bdevs_discovered": 3, 00:12:06.602 "num_base_bdevs_operational": 3, 00:12:06.602 "process": { 00:12:06.602 "type": "rebuild", 00:12:06.602 "target": "spare", 00:12:06.602 "progress": { 
00:12:06.602 "blocks": 26624, 00:12:06.602 "percent": 40 00:12:06.602 } 00:12:06.602 }, 00:12:06.602 "base_bdevs_list": [ 00:12:06.602 { 00:12:06.602 "name": "spare", 00:12:06.602 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:06.602 "is_configured": true, 00:12:06.602 "data_offset": 0, 00:12:06.602 "data_size": 65536 00:12:06.602 }, 00:12:06.602 { 00:12:06.602 "name": null, 00:12:06.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.602 "is_configured": false, 00:12:06.602 "data_offset": 0, 00:12:06.602 "data_size": 65536 00:12:06.602 }, 00:12:06.602 { 00:12:06.602 "name": "BaseBdev3", 00:12:06.602 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:06.602 "is_configured": true, 00:12:06.602 "data_offset": 0, 00:12:06.602 "data_size": 65536 00:12:06.602 }, 00:12:06.602 { 00:12:06.602 "name": "BaseBdev4", 00:12:06.602 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:06.602 "is_configured": true, 00:12:06.602 "data_offset": 0, 00:12:06.602 "data_size": 65536 00:12:06.602 } 00:12:06.602 ] 00:12:06.602 }' 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.602 16:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.543 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.543 "name": "raid_bdev1", 00:12:07.543 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:07.543 "strip_size_kb": 0, 00:12:07.543 "state": "online", 00:12:07.543 "raid_level": "raid1", 00:12:07.543 "superblock": false, 00:12:07.543 "num_base_bdevs": 4, 00:12:07.543 "num_base_bdevs_discovered": 3, 00:12:07.543 "num_base_bdevs_operational": 3, 00:12:07.543 "process": { 00:12:07.543 "type": "rebuild", 00:12:07.543 "target": "spare", 00:12:07.543 "progress": { 00:12:07.543 "blocks": 49152, 00:12:07.543 "percent": 75 00:12:07.543 } 00:12:07.543 }, 00:12:07.543 "base_bdevs_list": [ 00:12:07.543 { 00:12:07.543 "name": "spare", 00:12:07.543 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:07.543 "is_configured": true, 00:12:07.543 "data_offset": 0, 00:12:07.543 "data_size": 65536 00:12:07.543 }, 00:12:07.543 { 00:12:07.543 "name": null, 00:12:07.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.543 "is_configured": false, 00:12:07.543 "data_offset": 0, 00:12:07.543 "data_size": 65536 00:12:07.543 }, 00:12:07.543 { 00:12:07.543 "name": "BaseBdev3", 00:12:07.543 "uuid": 
"650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:07.543 "is_configured": true, 00:12:07.543 "data_offset": 0, 00:12:07.543 "data_size": 65536 00:12:07.543 }, 00:12:07.543 { 00:12:07.543 "name": "BaseBdev4", 00:12:07.543 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:07.543 "is_configured": true, 00:12:07.543 "data_offset": 0, 00:12:07.543 "data_size": 65536 00:12:07.543 } 00:12:07.543 ] 00:12:07.543 }' 00:12:07.802 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.802 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.802 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.802 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.802 16:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.372 [2024-11-28 16:24:59.986655] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:08.372 [2024-11-28 16:24:59.986738] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:08.372 [2024-11-28 16:24:59.986775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.943 16:25:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.943 "name": "raid_bdev1", 00:12:08.943 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:08.943 "strip_size_kb": 0, 00:12:08.943 "state": "online", 00:12:08.943 "raid_level": "raid1", 00:12:08.943 "superblock": false, 00:12:08.943 "num_base_bdevs": 4, 00:12:08.943 "num_base_bdevs_discovered": 3, 00:12:08.943 "num_base_bdevs_operational": 3, 00:12:08.943 "base_bdevs_list": [ 00:12:08.943 { 00:12:08.943 "name": "spare", 00:12:08.943 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:08.943 "is_configured": true, 00:12:08.943 "data_offset": 0, 00:12:08.943 "data_size": 65536 00:12:08.943 }, 00:12:08.943 { 00:12:08.943 "name": null, 00:12:08.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.943 "is_configured": false, 00:12:08.943 "data_offset": 0, 00:12:08.943 "data_size": 65536 00:12:08.943 }, 00:12:08.943 { 00:12:08.943 "name": "BaseBdev3", 00:12:08.943 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:08.943 "is_configured": true, 00:12:08.943 "data_offset": 0, 00:12:08.943 "data_size": 65536 00:12:08.943 }, 00:12:08.943 { 00:12:08.943 "name": "BaseBdev4", 00:12:08.943 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:08.943 "is_configured": true, 00:12:08.943 "data_offset": 0, 00:12:08.943 "data_size": 65536 00:12:08.943 } 00:12:08.943 ] 00:12:08.943 }' 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.943 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.943 "name": "raid_bdev1", 00:12:08.943 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:08.943 "strip_size_kb": 0, 00:12:08.943 "state": "online", 00:12:08.943 "raid_level": "raid1", 00:12:08.943 "superblock": false, 00:12:08.943 "num_base_bdevs": 4, 00:12:08.943 "num_base_bdevs_discovered": 3, 00:12:08.943 "num_base_bdevs_operational": 3, 00:12:08.943 
"base_bdevs_list": [ 00:12:08.943 { 00:12:08.943 "name": "spare", 00:12:08.943 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:08.943 "is_configured": true, 00:12:08.943 "data_offset": 0, 00:12:08.943 "data_size": 65536 00:12:08.943 }, 00:12:08.943 { 00:12:08.943 "name": null, 00:12:08.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.943 "is_configured": false, 00:12:08.943 "data_offset": 0, 00:12:08.943 "data_size": 65536 00:12:08.943 }, 00:12:08.944 { 00:12:08.944 "name": "BaseBdev3", 00:12:08.944 "uuid": "650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:08.944 "is_configured": true, 00:12:08.944 "data_offset": 0, 00:12:08.944 "data_size": 65536 00:12:08.944 }, 00:12:08.944 { 00:12:08.944 "name": "BaseBdev4", 00:12:08.944 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:08.944 "is_configured": true, 00:12:08.944 "data_offset": 0, 00:12:08.944 "data_size": 65536 00:12:08.944 } 00:12:08.944 ] 00:12:08.944 }' 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.944 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.203 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.203 "name": "raid_bdev1", 00:12:09.203 "uuid": "d0a97eaa-fc08-4b68-b786-f5322cb6c25b", 00:12:09.203 "strip_size_kb": 0, 00:12:09.203 "state": "online", 00:12:09.203 "raid_level": "raid1", 00:12:09.203 "superblock": false, 00:12:09.203 "num_base_bdevs": 4, 00:12:09.203 "num_base_bdevs_discovered": 3, 00:12:09.203 "num_base_bdevs_operational": 3, 00:12:09.203 "base_bdevs_list": [ 00:12:09.203 { 00:12:09.203 "name": "spare", 00:12:09.203 "uuid": "9a6998b2-7cbb-58a6-b1b5-b1a85c704222", 00:12:09.203 "is_configured": true, 00:12:09.203 "data_offset": 0, 00:12:09.203 "data_size": 65536 00:12:09.203 }, 00:12:09.203 { 00:12:09.203 "name": null, 00:12:09.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.203 "is_configured": false, 00:12:09.203 "data_offset": 0, 00:12:09.203 "data_size": 65536 00:12:09.203 }, 00:12:09.203 { 00:12:09.203 "name": "BaseBdev3", 00:12:09.203 "uuid": 
"650377d8-db52-573c-90d1-ac4488dbe3f7", 00:12:09.203 "is_configured": true, 00:12:09.203 "data_offset": 0, 00:12:09.203 "data_size": 65536 00:12:09.203 }, 00:12:09.203 { 00:12:09.203 "name": "BaseBdev4", 00:12:09.203 "uuid": "1b52b603-b8c2-5278-8a63-8616380c19c7", 00:12:09.203 "is_configured": true, 00:12:09.203 "data_offset": 0, 00:12:09.203 "data_size": 65536 00:12:09.203 } 00:12:09.203 ] 00:12:09.203 }' 00:12:09.203 16:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.203 16:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.462 [2024-11-28 16:25:01.129167] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.462 [2024-11-28 16:25:01.129198] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.462 [2024-11-28 16:25:01.129276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.462 [2024-11-28 16:25:01.129347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.462 [2024-11-28 16:25:01.129366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.462 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:09.721 /dev/nbd0 00:12:09.721 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:09.721 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:09.721 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:09.721 16:25:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:09.721 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.722 1+0 records in 00:12:09.722 1+0 records out 00:12:09.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319014 s, 12.8 MB/s 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.722 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:09.981 /dev/nbd1 00:12:09.981 
16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:09.981 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:09.981 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:09.981 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.982 1+0 records in 00:12:09.982 1+0 records out 00:12:09.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341866 s, 12.0 MB/s 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.982 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.242 16:25:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88191 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88191 ']' 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88191 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88191 00:12:10.503 killing process with pid 88191 00:12:10.503 Received shutdown signal, test time was about 60.000000 seconds 00:12:10.503 00:12:10.503 Latency(us) 00:12:10.503 [2024-11-28T16:25:02.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.503 [2024-11-28T16:25:02.274Z] 
=================================================================================================================== 00:12:10.503 [2024-11-28T16:25:02.274Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88191' 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88191 00:12:10.503 [2024-11-28 16:25:02.221406] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.503 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88191 00:12:10.763 [2024-11-28 16:25:02.273696] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.763 16:25:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:10.763 00:12:10.763 real 0m16.092s 00:12:10.763 user 0m18.046s 00:12:10.763 sys 0m3.274s 00:12:10.763 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.763 16:25:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.763 ************************************ 00:12:10.763 END TEST raid_rebuild_test 00:12:10.763 ************************************ 00:12:11.024 16:25:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:11.024 16:25:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:11.024 16:25:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.024 16:25:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.024 ************************************ 00:12:11.024 START TEST raid_rebuild_test_sb 00:12:11.024 
************************************ 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88623 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88623 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88623 ']' 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.024 16:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.024 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:11.024 Zero copy mechanism will not be used. 00:12:11.024 [2024-11-28 16:25:02.690317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:11.024 [2024-11-28 16:25:02.690448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88623 ] 00:12:11.284 [2024-11-28 16:25:02.854309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.284 [2024-11-28 16:25:02.897992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.284 [2024-11-28 16:25:02.939503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.284 [2024-11-28 16:25:02.939541] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.853 BaseBdev1_malloc 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.853 [2024-11-28 16:25:03.521397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:11.853 [2024-11-28 16:25:03.521456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.853 [2024-11-28 16:25:03.521498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:11.853 [2024-11-28 16:25:03.521511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.853 [2024-11-28 16:25:03.523574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.853 [2024-11-28 16:25:03.523612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:11.853 BaseBdev1 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.853 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.854 BaseBdev2_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.854 [2024-11-28 16:25:03.559303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:11.854 [2024-11-28 16:25:03.559365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.854 [2024-11-28 16:25:03.559388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:11.854 [2024-11-28 16:25:03.559398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.854 [2024-11-28 16:25:03.561555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.854 [2024-11-28 16:25:03.561589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:11.854 BaseBdev2 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.854 BaseBdev3_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.854 16:25:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.854 [2024-11-28 16:25:03.587921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:11.854 [2024-11-28 16:25:03.587969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.854 [2024-11-28 16:25:03.588007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:11.854 [2024-11-28 16:25:03.588016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.854 [2024-11-28 16:25:03.589993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.854 [2024-11-28 16:25:03.590023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:11.854 BaseBdev3 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.854 BaseBdev4_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.854 [2024-11-28 16:25:03.616253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:11.854 [2024-11-28 16:25:03.616306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.854 [2024-11-28 16:25:03.616345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:11.854 [2024-11-28 16:25:03.616353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.854 [2024-11-28 16:25:03.618324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.854 [2024-11-28 16:25:03.618357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:11.854 BaseBdev4 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.854 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 spare_malloc 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 spare_delay 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 
16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 [2024-11-28 16:25:03.656718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:12.114 [2024-11-28 16:25:03.656769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.114 [2024-11-28 16:25:03.656789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:12.114 [2024-11-28 16:25:03.656798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.114 [2024-11-28 16:25:03.658790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.114 [2024-11-28 16:25:03.658823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:12.114 spare 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 [2024-11-28 16:25:03.668775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.115 [2024-11-28 16:25:03.670518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.115 [2024-11-28 16:25:03.670590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.115 [2024-11-28 16:25:03.670632] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:12.115 [2024-11-28 16:25:03.670790] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:12.115 [2024-11-28 16:25:03.670808] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:12.115 [2024-11-28 16:25:03.671054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:12.115 [2024-11-28 16:25:03.671209] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:12.115 [2024-11-28 16:25:03.671227] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:12.115 [2024-11-28 16:25:03.671326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.115 16:25:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.115 "name": "raid_bdev1", 00:12:12.115 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:12.115 "strip_size_kb": 0, 00:12:12.115 "state": "online", 00:12:12.115 "raid_level": "raid1", 00:12:12.115 "superblock": true, 00:12:12.115 "num_base_bdevs": 4, 00:12:12.115 "num_base_bdevs_discovered": 4, 00:12:12.115 "num_base_bdevs_operational": 4, 00:12:12.115 "base_bdevs_list": [ 00:12:12.115 { 00:12:12.115 "name": "BaseBdev1", 00:12:12.115 "uuid": "3d5235b8-25d3-5301-a9f4-a95e637e3936", 00:12:12.115 "is_configured": true, 00:12:12.115 "data_offset": 2048, 00:12:12.115 "data_size": 63488 00:12:12.115 }, 00:12:12.115 { 00:12:12.115 "name": "BaseBdev2", 00:12:12.115 "uuid": "6280b3c6-804f-5d1c-86b7-5f1b689a0572", 00:12:12.115 "is_configured": true, 00:12:12.115 "data_offset": 2048, 00:12:12.115 "data_size": 63488 00:12:12.115 }, 00:12:12.115 { 00:12:12.115 "name": "BaseBdev3", 00:12:12.115 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:12.115 "is_configured": true, 00:12:12.115 "data_offset": 2048, 00:12:12.115 "data_size": 63488 00:12:12.115 }, 00:12:12.115 { 00:12:12.115 "name": "BaseBdev4", 00:12:12.115 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:12.115 "is_configured": true, 00:12:12.115 "data_offset": 2048, 00:12:12.115 "data_size": 
63488 00:12:12.115 } 00:12:12.115 ] 00:12:12.115 }' 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.115 16:25:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.374 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:12.374 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:12.374 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.374 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.374 [2024-11-28 16:25:04.144240] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:12.634 16:25:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:12.634 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:12.893 [2024-11-28 16:25:04.411927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:12.894 /dev/nbd0 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.894 1+0 records in 00:12:12.894 1+0 records out 00:12:12.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032405 s, 12.6 MB/s 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:12.894 16:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:18.172 63488+0 records in 00:12:18.172 63488+0 records out 00:12:18.172 32505856 bytes (33 MB, 31 MiB) copied, 5.30614 s, 6.1 MB/s 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.172 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.438 16:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.438 [2024-11-28 16:25:10.003395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:18.438 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:18.439 [2024-11-28 16:25:10.019443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.439 "name": 
"raid_bdev1", 00:12:18.439 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:18.439 "strip_size_kb": 0, 00:12:18.439 "state": "online", 00:12:18.439 "raid_level": "raid1", 00:12:18.439 "superblock": true, 00:12:18.439 "num_base_bdevs": 4, 00:12:18.439 "num_base_bdevs_discovered": 3, 00:12:18.439 "num_base_bdevs_operational": 3, 00:12:18.439 "base_bdevs_list": [ 00:12:18.439 { 00:12:18.439 "name": null, 00:12:18.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.439 "is_configured": false, 00:12:18.439 "data_offset": 0, 00:12:18.439 "data_size": 63488 00:12:18.439 }, 00:12:18.439 { 00:12:18.439 "name": "BaseBdev2", 00:12:18.439 "uuid": "6280b3c6-804f-5d1c-86b7-5f1b689a0572", 00:12:18.439 "is_configured": true, 00:12:18.439 "data_offset": 2048, 00:12:18.439 "data_size": 63488 00:12:18.439 }, 00:12:18.439 { 00:12:18.439 "name": "BaseBdev3", 00:12:18.439 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:18.439 "is_configured": true, 00:12:18.439 "data_offset": 2048, 00:12:18.439 "data_size": 63488 00:12:18.439 }, 00:12:18.439 { 00:12:18.439 "name": "BaseBdev4", 00:12:18.439 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:18.439 "is_configured": true, 00:12:18.439 "data_offset": 2048, 00:12:18.439 "data_size": 63488 00:12:18.439 } 00:12:18.439 ] 00:12:18.439 }' 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.439 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.722 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:18.722 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.722 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.723 [2024-11-28 16:25:10.430763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:18.723 [2024-11-28 16:25:10.434058] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:18.723 [2024-11-28 16:25:10.435957] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.723 16:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.723 16:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:19.677 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.677 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.677 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.677 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.677 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.938 "name": "raid_bdev1", 00:12:19.938 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:19.938 "strip_size_kb": 0, 00:12:19.938 "state": "online", 00:12:19.938 "raid_level": "raid1", 00:12:19.938 "superblock": true, 00:12:19.938 "num_base_bdevs": 4, 00:12:19.938 "num_base_bdevs_discovered": 4, 00:12:19.938 "num_base_bdevs_operational": 4, 00:12:19.938 
"process": { 00:12:19.938 "type": "rebuild", 00:12:19.938 "target": "spare", 00:12:19.938 "progress": { 00:12:19.938 "blocks": 20480, 00:12:19.938 "percent": 32 00:12:19.938 } 00:12:19.938 }, 00:12:19.938 "base_bdevs_list": [ 00:12:19.938 { 00:12:19.938 "name": "spare", 00:12:19.938 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 }, 00:12:19.938 { 00:12:19.938 "name": "BaseBdev2", 00:12:19.938 "uuid": "6280b3c6-804f-5d1c-86b7-5f1b689a0572", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 }, 00:12:19.938 { 00:12:19.938 "name": "BaseBdev3", 00:12:19.938 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 }, 00:12:19.938 { 00:12:19.938 "name": "BaseBdev4", 00:12:19.938 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 } 00:12:19.938 ] 00:12:19.938 }' 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.938 [2024-11-28 16:25:11.604002] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.938 [2024-11-28 16:25:11.640424] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:19.938 [2024-11-28 16:25:11.640549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.938 [2024-11-28 16:25:11.640589] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.938 [2024-11-28 16:25:11.640610] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.938 "name": "raid_bdev1", 00:12:19.938 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:19.938 "strip_size_kb": 0, 00:12:19.938 "state": "online", 00:12:19.938 "raid_level": "raid1", 00:12:19.938 "superblock": true, 00:12:19.938 "num_base_bdevs": 4, 00:12:19.938 "num_base_bdevs_discovered": 3, 00:12:19.938 "num_base_bdevs_operational": 3, 00:12:19.938 "base_bdevs_list": [ 00:12:19.938 { 00:12:19.938 "name": null, 00:12:19.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.938 "is_configured": false, 00:12:19.938 "data_offset": 0, 00:12:19.938 "data_size": 63488 00:12:19.938 }, 00:12:19.938 { 00:12:19.938 "name": "BaseBdev2", 00:12:19.938 "uuid": "6280b3c6-804f-5d1c-86b7-5f1b689a0572", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 }, 00:12:19.938 { 00:12:19.938 "name": "BaseBdev3", 00:12:19.938 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 }, 00:12:19.938 { 00:12:19.938 "name": "BaseBdev4", 00:12:19.938 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:19.938 "is_configured": true, 00:12:19.938 "data_offset": 2048, 00:12:19.938 "data_size": 63488 00:12:19.938 } 00:12:19.938 ] 00:12:19.938 }' 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.938 16:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.508 16:25:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.508 "name": "raid_bdev1", 00:12:20.508 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:20.508 "strip_size_kb": 0, 00:12:20.508 "state": "online", 00:12:20.508 "raid_level": "raid1", 00:12:20.508 "superblock": true, 00:12:20.508 "num_base_bdevs": 4, 00:12:20.508 "num_base_bdevs_discovered": 3, 00:12:20.508 "num_base_bdevs_operational": 3, 00:12:20.508 "base_bdevs_list": [ 00:12:20.508 { 00:12:20.508 "name": null, 00:12:20.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.508 "is_configured": false, 00:12:20.508 "data_offset": 0, 00:12:20.508 "data_size": 63488 00:12:20.508 }, 00:12:20.508 { 00:12:20.508 "name": "BaseBdev2", 00:12:20.508 "uuid": "6280b3c6-804f-5d1c-86b7-5f1b689a0572", 00:12:20.508 "is_configured": true, 00:12:20.508 "data_offset": 2048, 00:12:20.508 "data_size": 
63488 00:12:20.508 }, 00:12:20.508 { 00:12:20.508 "name": "BaseBdev3", 00:12:20.508 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:20.508 "is_configured": true, 00:12:20.508 "data_offset": 2048, 00:12:20.508 "data_size": 63488 00:12:20.508 }, 00:12:20.508 { 00:12:20.508 "name": "BaseBdev4", 00:12:20.508 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:20.508 "is_configured": true, 00:12:20.508 "data_offset": 2048, 00:12:20.508 "data_size": 63488 00:12:20.508 } 00:12:20.508 ] 00:12:20.508 }' 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.508 [2024-11-28 16:25:12.159881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.508 [2024-11-28 16:25:12.163056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:20.508 [2024-11-28 16:25:12.164927] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.508 16:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.447 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.707 "name": "raid_bdev1", 00:12:21.707 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:21.707 "strip_size_kb": 0, 00:12:21.707 "state": "online", 00:12:21.707 "raid_level": "raid1", 00:12:21.707 "superblock": true, 00:12:21.707 "num_base_bdevs": 4, 00:12:21.707 "num_base_bdevs_discovered": 4, 00:12:21.707 "num_base_bdevs_operational": 4, 00:12:21.707 "process": { 00:12:21.707 "type": "rebuild", 00:12:21.707 "target": "spare", 00:12:21.707 "progress": { 00:12:21.707 "blocks": 20480, 00:12:21.707 "percent": 32 00:12:21.707 } 00:12:21.707 }, 00:12:21.707 "base_bdevs_list": [ 00:12:21.707 { 00:12:21.707 "name": "spare", 00:12:21.707 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:21.707 "is_configured": true, 00:12:21.707 "data_offset": 2048, 00:12:21.707 "data_size": 63488 00:12:21.707 }, 00:12:21.707 { 00:12:21.707 "name": "BaseBdev2", 00:12:21.707 "uuid": 
"6280b3c6-804f-5d1c-86b7-5f1b689a0572", 00:12:21.707 "is_configured": true, 00:12:21.707 "data_offset": 2048, 00:12:21.707 "data_size": 63488 00:12:21.707 }, 00:12:21.707 { 00:12:21.707 "name": "BaseBdev3", 00:12:21.707 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:21.707 "is_configured": true, 00:12:21.707 "data_offset": 2048, 00:12:21.707 "data_size": 63488 00:12:21.707 }, 00:12:21.707 { 00:12:21.707 "name": "BaseBdev4", 00:12:21.707 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:21.707 "is_configured": true, 00:12:21.707 "data_offset": 2048, 00:12:21.707 "data_size": 63488 00:12:21.707 } 00:12:21.707 ] 00:12:21.707 }' 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:21.707 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.707 16:25:13 
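The `line 666: [: =: unary operator expected` error captured above is a classic bash pitfall: an unquoted variable that expands to nothing leaves `[` with only `= false`, a one-sided comparison it cannot parse. A minimal reproduction (the variable name here is illustrative, not taken from `bdev_raid.sh`):

```shell
#!/bin/bash
# Unset/empty variable, as in the logged failure.
flag=""

# Unquoted form: `[ $flag = false ]` expands to `[ = false ]`,
# which triggers "[: =: unary operator expected" (exit status 2,
# so the `if` falls through to the else branch, as seen in the log).
if [ $flag = false ] 2>/dev/null; then
  echo "unquoted: matched"
else
  echo "unquoted: test errored or did not match"
fi

# Quoting the expansion keeps the comparison two-sided even when empty.
if [ "$flag" = false ]; then
  echo "quoted: matched"
else
  echo "quoted: did not match"
fi

# bash's [[ ]] builtin skips word splitting entirely, so quoting is optional.
if [[ $flag = false ]]; then
  echo "double-bracket: matched"
else
  echo "double-bracket: did not match"
fi
```

In the log the test script still proceeds because the failed `[` simply takes the else path; quoting (or `[[ ]]`) would silence the diagnostic without changing control flow here.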
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.707 [2024-11-28 16:25:13.327868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.707 [2024-11-28 16:25:13.468782] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.707 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.967 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.967 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.967 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.967 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.967 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.967 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.967 "name": "raid_bdev1", 00:12:21.967 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:21.967 "strip_size_kb": 0, 00:12:21.967 
"state": "online", 00:12:21.967 "raid_level": "raid1", 00:12:21.967 "superblock": true, 00:12:21.967 "num_base_bdevs": 4, 00:12:21.967 "num_base_bdevs_discovered": 3, 00:12:21.967 "num_base_bdevs_operational": 3, 00:12:21.967 "process": { 00:12:21.967 "type": "rebuild", 00:12:21.967 "target": "spare", 00:12:21.967 "progress": { 00:12:21.967 "blocks": 24576, 00:12:21.967 "percent": 38 00:12:21.967 } 00:12:21.967 }, 00:12:21.967 "base_bdevs_list": [ 00:12:21.967 { 00:12:21.967 "name": "spare", 00:12:21.967 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:21.967 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": null, 00:12:21.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.968 "is_configured": false, 00:12:21.968 "data_offset": 0, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev3", 00:12:21.968 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev4", 00:12:21.968 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 } 00:12:21.968 ] 00:12:21.968 }' 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=372 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.968 "name": "raid_bdev1", 00:12:21.968 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:21.968 "strip_size_kb": 0, 00:12:21.968 "state": "online", 00:12:21.968 "raid_level": "raid1", 00:12:21.968 "superblock": true, 00:12:21.968 "num_base_bdevs": 4, 00:12:21.968 "num_base_bdevs_discovered": 3, 00:12:21.968 "num_base_bdevs_operational": 3, 00:12:21.968 "process": { 00:12:21.968 "type": "rebuild", 00:12:21.968 "target": "spare", 00:12:21.968 "progress": { 00:12:21.968 "blocks": 26624, 00:12:21.968 "percent": 41 00:12:21.968 } 00:12:21.968 }, 00:12:21.968 "base_bdevs_list": [ 00:12:21.968 { 00:12:21.968 "name": "spare", 00:12:21.968 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:21.968 "is_configured": 
true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": null, 00:12:21.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.968 "is_configured": false, 00:12:21.968 "data_offset": 0, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev3", 00:12:21.968 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev4", 00:12:21.968 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 } 00:12:21.968 ] 00:12:21.968 }' 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.968 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.228 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.228 16:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.166 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.166 "name": "raid_bdev1", 00:12:23.166 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:23.166 "strip_size_kb": 0, 00:12:23.166 "state": "online", 00:12:23.166 "raid_level": "raid1", 00:12:23.166 "superblock": true, 00:12:23.166 "num_base_bdevs": 4, 00:12:23.166 "num_base_bdevs_discovered": 3, 00:12:23.166 "num_base_bdevs_operational": 3, 00:12:23.166 "process": { 00:12:23.166 "type": "rebuild", 00:12:23.166 "target": "spare", 00:12:23.166 "progress": { 00:12:23.167 "blocks": 51200, 00:12:23.167 "percent": 80 00:12:23.167 } 00:12:23.167 }, 00:12:23.167 "base_bdevs_list": [ 00:12:23.167 { 00:12:23.167 "name": "spare", 00:12:23.167 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:23.167 "is_configured": true, 00:12:23.167 "data_offset": 2048, 00:12:23.167 "data_size": 63488 00:12:23.167 }, 00:12:23.167 { 00:12:23.167 "name": null, 00:12:23.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.167 "is_configured": false, 00:12:23.167 "data_offset": 0, 00:12:23.167 "data_size": 63488 00:12:23.167 }, 00:12:23.167 { 00:12:23.167 "name": "BaseBdev3", 00:12:23.167 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:23.167 "is_configured": true, 00:12:23.167 "data_offset": 2048, 00:12:23.167 "data_size": 63488 00:12:23.167 }, 00:12:23.167 { 00:12:23.167 "name": "BaseBdev4", 00:12:23.167 "uuid": 
"e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:23.167 "is_configured": true, 00:12:23.167 "data_offset": 2048, 00:12:23.167 "data_size": 63488 00:12:23.167 } 00:12:23.167 ] 00:12:23.167 }' 00:12:23.167 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.167 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.167 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.167 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.167 16:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:23.736 [2024-11-28 16:25:15.375392] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:23.736 [2024-11-28 16:25:15.375518] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:23.736 [2024-11-28 16:25:15.375654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.307 "name": "raid_bdev1", 00:12:24.307 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:24.307 "strip_size_kb": 0, 00:12:24.307 "state": "online", 00:12:24.307 "raid_level": "raid1", 00:12:24.307 "superblock": true, 00:12:24.307 "num_base_bdevs": 4, 00:12:24.307 "num_base_bdevs_discovered": 3, 00:12:24.307 "num_base_bdevs_operational": 3, 00:12:24.307 "base_bdevs_list": [ 00:12:24.307 { 00:12:24.307 "name": "spare", 00:12:24.307 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:24.307 "is_configured": true, 00:12:24.307 "data_offset": 2048, 00:12:24.307 "data_size": 63488 00:12:24.307 }, 00:12:24.307 { 00:12:24.307 "name": null, 00:12:24.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.307 "is_configured": false, 00:12:24.307 "data_offset": 0, 00:12:24.307 "data_size": 63488 00:12:24.307 }, 00:12:24.307 { 00:12:24.307 "name": "BaseBdev3", 00:12:24.307 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:24.307 "is_configured": true, 00:12:24.307 "data_offset": 2048, 00:12:24.307 "data_size": 63488 00:12:24.307 }, 00:12:24.307 { 00:12:24.307 "name": "BaseBdev4", 00:12:24.307 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:24.307 "is_configured": true, 00:12:24.307 "data_offset": 2048, 00:12:24.307 "data_size": 63488 00:12:24.307 } 00:12:24.307 ] 00:12:24.307 }' 00:12:24.307 16:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:24.307 
16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.307 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.566 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.566 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.566 "name": "raid_bdev1", 00:12:24.566 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:24.566 "strip_size_kb": 0, 00:12:24.566 "state": "online", 00:12:24.566 "raid_level": "raid1", 00:12:24.566 "superblock": true, 00:12:24.566 "num_base_bdevs": 4, 00:12:24.566 "num_base_bdevs_discovered": 3, 00:12:24.566 "num_base_bdevs_operational": 3, 00:12:24.566 "base_bdevs_list": [ 00:12:24.566 { 00:12:24.566 "name": "spare", 00:12:24.566 "uuid": 
"4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:24.566 "is_configured": true, 00:12:24.566 "data_offset": 2048, 00:12:24.566 "data_size": 63488 00:12:24.566 }, 00:12:24.566 { 00:12:24.566 "name": null, 00:12:24.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.566 "is_configured": false, 00:12:24.566 "data_offset": 0, 00:12:24.566 "data_size": 63488 00:12:24.566 }, 00:12:24.566 { 00:12:24.566 "name": "BaseBdev3", 00:12:24.566 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:24.566 "is_configured": true, 00:12:24.566 "data_offset": 2048, 00:12:24.566 "data_size": 63488 00:12:24.566 }, 00:12:24.566 { 00:12:24.566 "name": "BaseBdev4", 00:12:24.566 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:24.566 "is_configured": true, 00:12:24.566 "data_offset": 2048, 00:12:24.566 "data_size": 63488 00:12:24.566 } 00:12:24.566 ] 00:12:24.566 }' 00:12:24.566 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.566 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.566 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.567 "name": "raid_bdev1", 00:12:24.567 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:24.567 "strip_size_kb": 0, 00:12:24.567 "state": "online", 00:12:24.567 "raid_level": "raid1", 00:12:24.567 "superblock": true, 00:12:24.567 "num_base_bdevs": 4, 00:12:24.567 "num_base_bdevs_discovered": 3, 00:12:24.567 "num_base_bdevs_operational": 3, 00:12:24.567 "base_bdevs_list": [ 00:12:24.567 { 00:12:24.567 "name": "spare", 00:12:24.567 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:24.567 "is_configured": true, 00:12:24.567 "data_offset": 2048, 00:12:24.567 "data_size": 63488 00:12:24.567 }, 00:12:24.567 { 00:12:24.567 "name": null, 00:12:24.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.567 "is_configured": false, 00:12:24.567 "data_offset": 0, 00:12:24.567 "data_size": 63488 00:12:24.567 }, 00:12:24.567 { 00:12:24.567 "name": "BaseBdev3", 00:12:24.567 "uuid": 
"8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:24.567 "is_configured": true, 00:12:24.567 "data_offset": 2048, 00:12:24.567 "data_size": 63488 00:12:24.567 }, 00:12:24.567 { 00:12:24.567 "name": "BaseBdev4", 00:12:24.567 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:24.567 "is_configured": true, 00:12:24.567 "data_offset": 2048, 00:12:24.567 "data_size": 63488 00:12:24.567 } 00:12:24.567 ] 00:12:24.567 }' 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.567 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.135 [2024-11-28 16:25:16.685418] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.135 [2024-11-28 16:25:16.685488] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.135 [2024-11-28 16:25:16.685609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.135 [2024-11-28 16:25:16.685724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.135 [2024-11-28 16:25:16.685820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:25.135 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.136 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:25.396 /dev/nbd0 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.396 1+0 records in 00:12:25.396 1+0 records out 00:12:25.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257069 s, 15.9 MB/s 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.396 16:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:25.656 /dev/nbd1 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.656 1+0 records in 00:12:25.656 1+0 records out 00:12:25.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460599 s, 8.9 MB/s 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.656 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:25.916 16:25:17 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.916 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:26.175 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.176 [2024-11-28 16:25:17.715479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:26.176 [2024-11-28 16:25:17.715584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.176 [2024-11-28 16:25:17.715611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:26.176 [2024-11-28 16:25:17.715625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.176 [2024-11-28 16:25:17.717816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.176 [2024-11-28 16:25:17.717869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:26.176 [2024-11-28 16:25:17.717956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:26.176 [2024-11-28 16:25:17.717995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.176 [2024-11-28 16:25:17.718111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.176 [2024-11-28 16:25:17.718208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:26.176 spare 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.176 [2024-11-28 16:25:17.818098] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:26.176 [2024-11-28 16:25:17.818126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.176 [2024-11-28 
16:25:17.818387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:26.176 [2024-11-28 16:25:17.818534] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:26.176 [2024-11-28 16:25:17.818544] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:26.176 [2024-11-28 16:25:17.818666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.176 "name": "raid_bdev1", 00:12:26.176 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:26.176 "strip_size_kb": 0, 00:12:26.176 "state": "online", 00:12:26.176 "raid_level": "raid1", 00:12:26.176 "superblock": true, 00:12:26.176 "num_base_bdevs": 4, 00:12:26.176 "num_base_bdevs_discovered": 3, 00:12:26.176 "num_base_bdevs_operational": 3, 00:12:26.176 "base_bdevs_list": [ 00:12:26.176 { 00:12:26.176 "name": "spare", 00:12:26.176 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:26.176 "is_configured": true, 00:12:26.176 "data_offset": 2048, 00:12:26.176 "data_size": 63488 00:12:26.176 }, 00:12:26.176 { 00:12:26.176 "name": null, 00:12:26.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.176 "is_configured": false, 00:12:26.176 "data_offset": 2048, 00:12:26.176 "data_size": 63488 00:12:26.176 }, 00:12:26.176 { 00:12:26.176 "name": "BaseBdev3", 00:12:26.176 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:26.176 "is_configured": true, 00:12:26.176 "data_offset": 2048, 00:12:26.176 "data_size": 63488 00:12:26.176 }, 00:12:26.176 { 00:12:26.176 "name": "BaseBdev4", 00:12:26.176 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:26.176 "is_configured": true, 00:12:26.176 "data_offset": 2048, 00:12:26.176 "data_size": 63488 00:12:26.176 } 00:12:26.176 ] 00:12:26.176 }' 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.176 16:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.746 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.746 "name": "raid_bdev1", 00:12:26.746 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:26.746 "strip_size_kb": 0, 00:12:26.746 "state": "online", 00:12:26.746 "raid_level": "raid1", 00:12:26.746 "superblock": true, 00:12:26.746 "num_base_bdevs": 4, 00:12:26.746 "num_base_bdevs_discovered": 3, 00:12:26.746 "num_base_bdevs_operational": 3, 00:12:26.746 "base_bdevs_list": [ 00:12:26.746 { 00:12:26.746 "name": "spare", 00:12:26.746 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:26.746 "is_configured": true, 00:12:26.746 "data_offset": 2048, 00:12:26.746 "data_size": 63488 00:12:26.746 }, 00:12:26.746 { 00:12:26.746 "name": null, 00:12:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.746 "is_configured": false, 00:12:26.746 "data_offset": 2048, 00:12:26.747 "data_size": 63488 00:12:26.747 }, 00:12:26.747 { 00:12:26.747 "name": "BaseBdev3", 00:12:26.747 
"uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:26.747 "is_configured": true, 00:12:26.747 "data_offset": 2048, 00:12:26.747 "data_size": 63488 00:12:26.747 }, 00:12:26.747 { 00:12:26.747 "name": "BaseBdev4", 00:12:26.747 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:26.747 "is_configured": true, 00:12:26.747 "data_offset": 2048, 00:12:26.747 "data_size": 63488 00:12:26.747 } 00:12:26.747 ] 00:12:26.747 }' 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.747 [2024-11-28 16:25:18.438454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.747 16:25:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.747 "name": "raid_bdev1", 00:12:26.747 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:26.747 "strip_size_kb": 0, 00:12:26.747 "state": "online", 
00:12:26.747 "raid_level": "raid1", 00:12:26.747 "superblock": true, 00:12:26.747 "num_base_bdevs": 4, 00:12:26.747 "num_base_bdevs_discovered": 2, 00:12:26.747 "num_base_bdevs_operational": 2, 00:12:26.747 "base_bdevs_list": [ 00:12:26.747 { 00:12:26.747 "name": null, 00:12:26.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.747 "is_configured": false, 00:12:26.747 "data_offset": 0, 00:12:26.747 "data_size": 63488 00:12:26.747 }, 00:12:26.747 { 00:12:26.747 "name": null, 00:12:26.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.747 "is_configured": false, 00:12:26.747 "data_offset": 2048, 00:12:26.747 "data_size": 63488 00:12:26.747 }, 00:12:26.747 { 00:12:26.747 "name": "BaseBdev3", 00:12:26.747 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:26.747 "is_configured": true, 00:12:26.747 "data_offset": 2048, 00:12:26.747 "data_size": 63488 00:12:26.747 }, 00:12:26.747 { 00:12:26.747 "name": "BaseBdev4", 00:12:26.747 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:26.747 "is_configured": true, 00:12:26.747 "data_offset": 2048, 00:12:26.747 "data_size": 63488 00:12:26.747 } 00:12:26.747 ] 00:12:26.747 }' 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.747 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.316 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.316 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.316 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.316 [2024-11-28 16:25:18.877711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.316 [2024-11-28 16:25:18.877970] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:12:27.316 [2024-11-28 16:25:18.878040] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:27.316 [2024-11-28 16:25:18.878143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.316 [2024-11-28 16:25:18.881477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:27.316 [2024-11-28 16:25:18.883363] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.316 16:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.316 16:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.254 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.254 "name": "raid_bdev1", 00:12:28.254 "uuid": 
"2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:28.254 "strip_size_kb": 0, 00:12:28.254 "state": "online", 00:12:28.254 "raid_level": "raid1", 00:12:28.254 "superblock": true, 00:12:28.254 "num_base_bdevs": 4, 00:12:28.254 "num_base_bdevs_discovered": 3, 00:12:28.254 "num_base_bdevs_operational": 3, 00:12:28.254 "process": { 00:12:28.254 "type": "rebuild", 00:12:28.254 "target": "spare", 00:12:28.254 "progress": { 00:12:28.254 "blocks": 20480, 00:12:28.254 "percent": 32 00:12:28.254 } 00:12:28.254 }, 00:12:28.254 "base_bdevs_list": [ 00:12:28.254 { 00:12:28.254 "name": "spare", 00:12:28.255 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:28.255 "is_configured": true, 00:12:28.255 "data_offset": 2048, 00:12:28.255 "data_size": 63488 00:12:28.255 }, 00:12:28.255 { 00:12:28.255 "name": null, 00:12:28.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.255 "is_configured": false, 00:12:28.255 "data_offset": 2048, 00:12:28.255 "data_size": 63488 00:12:28.255 }, 00:12:28.255 { 00:12:28.255 "name": "BaseBdev3", 00:12:28.255 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:28.255 "is_configured": true, 00:12:28.255 "data_offset": 2048, 00:12:28.255 "data_size": 63488 00:12:28.255 }, 00:12:28.255 { 00:12:28.255 "name": "BaseBdev4", 00:12:28.255 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:28.255 "is_configured": true, 00:12:28.255 "data_offset": 2048, 00:12:28.255 "data_size": 63488 00:12:28.255 } 00:12:28.255 ] 00:12:28.255 }' 00:12:28.255 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.255 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.255 16:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.514 [2024-11-28 16:25:20.050100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.514 [2024-11-28 16:25:20.087334] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:28.514 [2024-11-28 16:25:20.087393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.514 [2024-11-28 16:25:20.087408] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.514 [2024-11-28 16:25:20.087418] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.514 "name": "raid_bdev1", 00:12:28.514 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:28.514 "strip_size_kb": 0, 00:12:28.514 "state": "online", 00:12:28.514 "raid_level": "raid1", 00:12:28.514 "superblock": true, 00:12:28.514 "num_base_bdevs": 4, 00:12:28.514 "num_base_bdevs_discovered": 2, 00:12:28.514 "num_base_bdevs_operational": 2, 00:12:28.514 "base_bdevs_list": [ 00:12:28.514 { 00:12:28.514 "name": null, 00:12:28.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.514 "is_configured": false, 00:12:28.514 "data_offset": 0, 00:12:28.514 "data_size": 63488 00:12:28.514 }, 00:12:28.514 { 00:12:28.514 "name": null, 00:12:28.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.514 "is_configured": false, 00:12:28.514 "data_offset": 2048, 00:12:28.514 "data_size": 63488 00:12:28.514 }, 00:12:28.514 { 00:12:28.514 "name": "BaseBdev3", 00:12:28.514 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:28.514 "is_configured": true, 00:12:28.514 "data_offset": 2048, 00:12:28.514 "data_size": 63488 00:12:28.514 }, 00:12:28.514 { 00:12:28.514 "name": "BaseBdev4", 00:12:28.514 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:28.514 "is_configured": true, 00:12:28.514 
"data_offset": 2048, 00:12:28.514 "data_size": 63488 00:12:28.514 } 00:12:28.514 ] 00:12:28.514 }' 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.514 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.081 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:29.081 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.081 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.081 [2024-11-28 16:25:20.558162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:29.081 [2024-11-28 16:25:20.558265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.081 [2024-11-28 16:25:20.558327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:29.081 [2024-11-28 16:25:20.558369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.081 [2024-11-28 16:25:20.558898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.081 [2024-11-28 16:25:20.558961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:29.081 [2024-11-28 16:25:20.559089] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:29.081 [2024-11-28 16:25:20.559141] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:29.081 [2024-11-28 16:25:20.559183] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:29.081 [2024-11-28 16:25:20.559258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.081 [2024-11-28 16:25:20.562282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:29.081 spare 00:12:29.081 16:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.081 16:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:29.081 [2024-11-28 16:25:20.564154] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.017 "name": "raid_bdev1", 00:12:30.017 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:30.017 "strip_size_kb": 0, 00:12:30.017 "state": "online", 00:12:30.017 
"raid_level": "raid1", 00:12:30.017 "superblock": true, 00:12:30.017 "num_base_bdevs": 4, 00:12:30.017 "num_base_bdevs_discovered": 3, 00:12:30.017 "num_base_bdevs_operational": 3, 00:12:30.017 "process": { 00:12:30.017 "type": "rebuild", 00:12:30.017 "target": "spare", 00:12:30.017 "progress": { 00:12:30.017 "blocks": 20480, 00:12:30.017 "percent": 32 00:12:30.017 } 00:12:30.017 }, 00:12:30.017 "base_bdevs_list": [ 00:12:30.017 { 00:12:30.017 "name": "spare", 00:12:30.017 "uuid": "4618bc57-e624-5b41-9eca-2a4eef944592", 00:12:30.017 "is_configured": true, 00:12:30.017 "data_offset": 2048, 00:12:30.017 "data_size": 63488 00:12:30.017 }, 00:12:30.017 { 00:12:30.017 "name": null, 00:12:30.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.017 "is_configured": false, 00:12:30.017 "data_offset": 2048, 00:12:30.017 "data_size": 63488 00:12:30.017 }, 00:12:30.017 { 00:12:30.017 "name": "BaseBdev3", 00:12:30.017 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:30.017 "is_configured": true, 00:12:30.017 "data_offset": 2048, 00:12:30.017 "data_size": 63488 00:12:30.017 }, 00:12:30.017 { 00:12:30.017 "name": "BaseBdev4", 00:12:30.017 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:30.017 "is_configured": true, 00:12:30.017 "data_offset": 2048, 00:12:30.017 "data_size": 63488 00:12:30.017 } 00:12:30.017 ] 00:12:30.017 }' 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.017 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.018 [2024-11-28 16:25:21.728985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.018 [2024-11-28 16:25:21.767997] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:30.018 [2024-11-28 16:25:21.768080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.018 [2024-11-28 16:25:21.768099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.018 [2024-11-28 16:25:21.768107] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.018 
16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.018 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.276 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.276 16:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.276 "name": "raid_bdev1", 00:12:30.276 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:30.276 "strip_size_kb": 0, 00:12:30.276 "state": "online", 00:12:30.276 "raid_level": "raid1", 00:12:30.276 "superblock": true, 00:12:30.276 "num_base_bdevs": 4, 00:12:30.276 "num_base_bdevs_discovered": 2, 00:12:30.276 "num_base_bdevs_operational": 2, 00:12:30.276 "base_bdevs_list": [ 00:12:30.276 { 00:12:30.276 "name": null, 00:12:30.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.276 "is_configured": false, 00:12:30.276 "data_offset": 0, 00:12:30.276 "data_size": 63488 00:12:30.276 }, 00:12:30.276 { 00:12:30.276 "name": null, 00:12:30.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.276 "is_configured": false, 00:12:30.276 "data_offset": 2048, 00:12:30.276 "data_size": 63488 00:12:30.276 }, 00:12:30.276 { 00:12:30.276 "name": "BaseBdev3", 00:12:30.276 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:30.276 "is_configured": true, 00:12:30.276 "data_offset": 2048, 00:12:30.276 "data_size": 63488 00:12:30.276 }, 00:12:30.276 { 00:12:30.276 "name": "BaseBdev4", 00:12:30.276 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:30.276 "is_configured": true, 00:12:30.276 "data_offset": 2048, 00:12:30.276 "data_size": 63488 00:12:30.276 } 00:12:30.276 ] 00:12:30.276 }' 00:12:30.276 16:25:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.276 16:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.535 "name": "raid_bdev1", 00:12:30.535 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:30.535 "strip_size_kb": 0, 00:12:30.535 "state": "online", 00:12:30.535 "raid_level": "raid1", 00:12:30.535 "superblock": true, 00:12:30.535 "num_base_bdevs": 4, 00:12:30.535 "num_base_bdevs_discovered": 2, 00:12:30.535 "num_base_bdevs_operational": 2, 00:12:30.535 "base_bdevs_list": [ 00:12:30.535 { 00:12:30.535 "name": null, 00:12:30.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.535 "is_configured": false, 00:12:30.535 "data_offset": 0, 00:12:30.535 "data_size": 63488 00:12:30.535 }, 00:12:30.535 
{ 00:12:30.535 "name": null, 00:12:30.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.535 "is_configured": false, 00:12:30.535 "data_offset": 2048, 00:12:30.535 "data_size": 63488 00:12:30.535 }, 00:12:30.535 { 00:12:30.535 "name": "BaseBdev3", 00:12:30.535 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:30.535 "is_configured": true, 00:12:30.535 "data_offset": 2048, 00:12:30.535 "data_size": 63488 00:12:30.535 }, 00:12:30.535 { 00:12:30.535 "name": "BaseBdev4", 00:12:30.535 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:30.535 "is_configured": true, 00:12:30.535 "data_offset": 2048, 00:12:30.535 "data_size": 63488 00:12:30.535 } 00:12:30.535 ] 00:12:30.535 }' 00:12:30.535 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.794 [2024-11-28 16:25:22.382696] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:30.794 [2024-11-28 16:25:22.382750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.794 [2024-11-28 16:25:22.382770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:30.794 [2024-11-28 16:25:22.382779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.794 [2024-11-28 16:25:22.383229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.794 [2024-11-28 16:25:22.383246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.794 [2024-11-28 16:25:22.383316] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:30.794 [2024-11-28 16:25:22.383330] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:30.794 [2024-11-28 16:25:22.383339] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:30.794 [2024-11-28 16:25:22.383348] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:30.794 BaseBdev1 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.794 16:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.731 16:25:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.731 "name": "raid_bdev1", 00:12:31.731 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:31.731 "strip_size_kb": 0, 00:12:31.731 "state": "online", 00:12:31.731 "raid_level": "raid1", 00:12:31.731 "superblock": true, 00:12:31.731 "num_base_bdevs": 4, 00:12:31.731 "num_base_bdevs_discovered": 2, 00:12:31.731 "num_base_bdevs_operational": 2, 00:12:31.731 "base_bdevs_list": [ 00:12:31.731 { 00:12:31.731 "name": null, 00:12:31.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.731 "is_configured": false, 00:12:31.731 "data_offset": 0, 00:12:31.731 "data_size": 63488 00:12:31.731 }, 00:12:31.731 { 00:12:31.731 "name": null, 00:12:31.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.731 
"is_configured": false, 00:12:31.731 "data_offset": 2048, 00:12:31.731 "data_size": 63488 00:12:31.731 }, 00:12:31.731 { 00:12:31.731 "name": "BaseBdev3", 00:12:31.731 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:31.731 "is_configured": true, 00:12:31.731 "data_offset": 2048, 00:12:31.731 "data_size": 63488 00:12:31.731 }, 00:12:31.731 { 00:12:31.731 "name": "BaseBdev4", 00:12:31.731 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:31.731 "is_configured": true, 00:12:31.731 "data_offset": 2048, 00:12:31.731 "data_size": 63488 00:12:31.731 } 00:12:31.731 ] 00:12:31.731 }' 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.731 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:32.302 "name": "raid_bdev1", 00:12:32.302 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:32.302 "strip_size_kb": 0, 00:12:32.302 "state": "online", 00:12:32.302 "raid_level": "raid1", 00:12:32.302 "superblock": true, 00:12:32.302 "num_base_bdevs": 4, 00:12:32.302 "num_base_bdevs_discovered": 2, 00:12:32.302 "num_base_bdevs_operational": 2, 00:12:32.302 "base_bdevs_list": [ 00:12:32.302 { 00:12:32.302 "name": null, 00:12:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.302 "is_configured": false, 00:12:32.302 "data_offset": 0, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": null, 00:12:32.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.302 "is_configured": false, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": "BaseBdev3", 00:12:32.302 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 }, 00:12:32.302 { 00:12:32.302 "name": "BaseBdev4", 00:12:32.302 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:32.302 "is_configured": true, 00:12:32.302 "data_offset": 2048, 00:12:32.302 "data_size": 63488 00:12:32.302 } 00:12:32.302 ] 00:12:32.302 }' 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.302 [2024-11-28 16:25:23.992198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.302 [2024-11-28 16:25:23.992352] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:32.302 [2024-11-28 16:25:23.992371] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:32.302 request: 00:12:32.302 { 00:12:32.302 "base_bdev": "BaseBdev1", 00:12:32.302 "raid_bdev": "raid_bdev1", 00:12:32.302 "method": "bdev_raid_add_base_bdev", 00:12:32.302 "req_id": 1 00:12:32.302 } 00:12:32.302 Got JSON-RPC error response 00:12:32.302 response: 00:12:32.302 { 00:12:32.302 "code": -22, 00:12:32.302 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:32.302 } 00:12:32.302 16:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:32.302 16:25:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:32.302 16:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.302 16:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.302 16:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.302 16:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.243 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.244 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.244 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.503 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.503 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.503 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.503 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:33.503 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.503 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.503 "name": "raid_bdev1", 00:12:33.503 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:33.503 "strip_size_kb": 0, 00:12:33.503 "state": "online", 00:12:33.503 "raid_level": "raid1", 00:12:33.503 "superblock": true, 00:12:33.503 "num_base_bdevs": 4, 00:12:33.503 "num_base_bdevs_discovered": 2, 00:12:33.503 "num_base_bdevs_operational": 2, 00:12:33.503 "base_bdevs_list": [ 00:12:33.503 { 00:12:33.503 "name": null, 00:12:33.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.503 "is_configured": false, 00:12:33.503 "data_offset": 0, 00:12:33.503 "data_size": 63488 00:12:33.503 }, 00:12:33.503 { 00:12:33.503 "name": null, 00:12:33.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.504 "is_configured": false, 00:12:33.504 "data_offset": 2048, 00:12:33.504 "data_size": 63488 00:12:33.504 }, 00:12:33.504 { 00:12:33.504 "name": "BaseBdev3", 00:12:33.504 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:33.504 "is_configured": true, 00:12:33.504 "data_offset": 2048, 00:12:33.504 "data_size": 63488 00:12:33.504 }, 00:12:33.504 { 00:12:33.504 "name": "BaseBdev4", 00:12:33.504 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:33.504 "is_configured": true, 00:12:33.504 "data_offset": 2048, 00:12:33.504 "data_size": 63488 00:12:33.504 } 00:12:33.504 ] 00:12:33.504 }' 00:12:33.504 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.504 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.763 16:25:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.763 "name": "raid_bdev1", 00:12:33.763 "uuid": "2ad4c12b-8fa3-4576-a9b1-675882f7b92e", 00:12:33.763 "strip_size_kb": 0, 00:12:33.763 "state": "online", 00:12:33.763 "raid_level": "raid1", 00:12:33.763 "superblock": true, 00:12:33.763 "num_base_bdevs": 4, 00:12:33.763 "num_base_bdevs_discovered": 2, 00:12:33.763 "num_base_bdevs_operational": 2, 00:12:33.763 "base_bdevs_list": [ 00:12:33.763 { 00:12:33.763 "name": null, 00:12:33.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.763 "is_configured": false, 00:12:33.763 "data_offset": 0, 00:12:33.763 "data_size": 63488 00:12:33.763 }, 00:12:33.763 { 00:12:33.763 "name": null, 00:12:33.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.763 "is_configured": false, 00:12:33.763 "data_offset": 2048, 00:12:33.763 "data_size": 63488 00:12:33.763 }, 00:12:33.763 { 00:12:33.763 "name": "BaseBdev3", 00:12:33.763 "uuid": "8ca52b6b-b364-57ff-8016-e97098a585af", 00:12:33.763 "is_configured": true, 00:12:33.763 "data_offset": 2048, 00:12:33.763 "data_size": 63488 00:12:33.763 }, 
00:12:33.763 { 00:12:33.763 "name": "BaseBdev4", 00:12:33.763 "uuid": "e4f280ec-ba15-5c0d-ae29-a6bcb6697df4", 00:12:33.763 "is_configured": true, 00:12:33.763 "data_offset": 2048, 00:12:33.763 "data_size": 63488 00:12:33.763 } 00:12:33.763 ] 00:12:33.763 }' 00:12:33.763 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88623 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88623 ']' 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88623 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88623 00:12:34.023 killing process with pid 88623 00:12:34.023 Received shutdown signal, test time was about 60.000000 seconds 00:12:34.023 00:12:34.023 Latency(us) 00:12:34.023 [2024-11-28T16:25:25.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.023 [2024-11-28T16:25:25.794Z] =================================================================================================================== 00:12:34.023 [2024-11-28T16:25:25.794Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88623' 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88623 00:12:34.023 [2024-11-28 16:25:25.627122] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.023 [2024-11-28 16:25:25.627255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.023 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88623 00:12:34.023 [2024-11-28 16:25:25.627318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.023 [2024-11-28 16:25:25.627329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:34.023 [2024-11-28 16:25:25.679184] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:34.283 00:12:34.283 real 0m23.329s 00:12:34.283 user 0m28.477s 00:12:34.283 sys 0m3.973s 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.283 ************************************ 00:12:34.283 END TEST raid_rebuild_test_sb 00:12:34.283 ************************************ 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.283 16:25:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:34.283 16:25:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:34.283 16:25:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.283 16:25:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:34.283 ************************************ 00:12:34.283 START TEST raid_rebuild_test_io 00:12:34.283 ************************************ 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:34.283 16:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89356 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89356 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89356 ']' 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.283 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.543 [2024-11-28 16:25:26.097193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:34.543 [2024-11-28 16:25:26.097358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:34.543 Zero copy mechanism will not be used. 00:12:34.543 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89356 ] 00:12:34.543 [2024-11-28 16:25:26.256310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.543 [2024-11-28 16:25:26.302457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.802 [2024-11-28 16:25:26.344247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.802 [2024-11-28 16:25:26.344369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.373 BaseBdev1_malloc 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.373 [2024-11-28 16:25:26.942030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:35.373 [2024-11-28 16:25:26.942099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.373 [2024-11-28 16:25:26.942122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:35.373 [2024-11-28 16:25:26.942137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.373 [2024-11-28 16:25:26.944221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.373 [2024-11-28 16:25:26.944264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.373 BaseBdev1 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.373 BaseBdev2_malloc 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.373 [2024-11-28 16:25:26.986552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:35.373 [2024-11-28 16:25:26.986658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.373 [2024-11-28 16:25:26.986701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:35.373 [2024-11-28 16:25:26.986722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.373 [2024-11-28 16:25:26.991360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.373 [2024-11-28 16:25:26.991412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.373 BaseBdev2 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.373 16:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.373 BaseBdev3_malloc 00:12:35.373 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 [2024-11-28 16:25:27.016911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:35.374 [2024-11-28 16:25:27.017003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.374 [2024-11-28 16:25:27.017042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:35.374 [2024-11-28 16:25:27.017070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.374 [2024-11-28 16:25:27.019133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.374 [2024-11-28 16:25:27.019199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:35.374 BaseBdev3 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 BaseBdev4_malloc 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 [2024-11-28 16:25:27.045391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:35.374 [2024-11-28 16:25:27.045439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.374 [2024-11-28 16:25:27.045465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:35.374 [2024-11-28 16:25:27.045473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.374 [2024-11-28 16:25:27.047519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.374 [2024-11-28 16:25:27.047550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:35.374 BaseBdev4 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 spare_malloc 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 spare_delay 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 
16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 [2024-11-28 16:25:27.085849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:35.374 [2024-11-28 16:25:27.085893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.374 [2024-11-28 16:25:27.085914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:35.374 [2024-11-28 16:25:27.085923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.374 [2024-11-28 16:25:27.087971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.374 [2024-11-28 16:25:27.088005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:35.374 spare 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 [2024-11-28 16:25:27.097916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.374 [2024-11-28 16:25:27.099664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.374 [2024-11-28 16:25:27.099744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.374 [2024-11-28 16:25:27.099788] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.374 [2024-11-28 16:25:27.099876] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:35.374 [2024-11-28 16:25:27.099886] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:35.374 [2024-11-28 16:25:27.100127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:35.374 [2024-11-28 16:25:27.100301] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:35.374 [2024-11-28 16:25:27.100320] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:35.374 [2024-11-28 16:25:27.100447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.374 16:25:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.635 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.635 "name": "raid_bdev1", 00:12:35.635 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:35.635 "strip_size_kb": 0, 00:12:35.635 "state": "online", 00:12:35.635 "raid_level": "raid1", 00:12:35.635 "superblock": false, 00:12:35.635 "num_base_bdevs": 4, 00:12:35.635 "num_base_bdevs_discovered": 4, 00:12:35.635 "num_base_bdevs_operational": 4, 00:12:35.635 "base_bdevs_list": [ 00:12:35.635 { 00:12:35.635 "name": "BaseBdev1", 00:12:35.635 "uuid": "a0d456fd-f87b-51ea-a89e-a1ab065c4166", 00:12:35.635 "is_configured": true, 00:12:35.635 "data_offset": 0, 00:12:35.635 "data_size": 65536 00:12:35.635 }, 00:12:35.635 { 00:12:35.635 "name": "BaseBdev2", 00:12:35.635 "uuid": "e60d4297-dc6d-5c1a-8da7-7f259c1d77bb", 00:12:35.635 "is_configured": true, 00:12:35.635 "data_offset": 0, 00:12:35.635 "data_size": 65536 00:12:35.635 }, 00:12:35.635 { 00:12:35.635 "name": "BaseBdev3", 00:12:35.635 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:35.635 "is_configured": true, 00:12:35.635 "data_offset": 0, 00:12:35.635 "data_size": 65536 00:12:35.635 }, 00:12:35.635 { 00:12:35.635 "name": "BaseBdev4", 00:12:35.635 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:35.635 "is_configured": true, 00:12:35.635 "data_offset": 0, 00:12:35.635 "data_size": 65536 
00:12:35.635 } 00:12:35.635 ] 00:12:35.635 }' 00:12:35.635 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.635 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 [2024-11-28 16:25:27.549382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 [2024-11-28 16:25:27.636933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:12:35.896 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.156 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.156 "name": "raid_bdev1", 00:12:36.156 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:36.156 "strip_size_kb": 0, 00:12:36.156 "state": "online", 00:12:36.156 "raid_level": "raid1", 00:12:36.156 "superblock": false, 00:12:36.156 "num_base_bdevs": 4, 00:12:36.156 "num_base_bdevs_discovered": 3, 00:12:36.156 "num_base_bdevs_operational": 3, 00:12:36.156 "base_bdevs_list": [ 00:12:36.156 { 00:12:36.156 "name": null, 00:12:36.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.156 "is_configured": false, 00:12:36.156 "data_offset": 0, 00:12:36.156 "data_size": 65536 00:12:36.156 }, 00:12:36.156 { 00:12:36.156 "name": "BaseBdev2", 00:12:36.156 "uuid": "e60d4297-dc6d-5c1a-8da7-7f259c1d77bb", 00:12:36.156 "is_configured": true, 00:12:36.156 "data_offset": 0, 00:12:36.156 "data_size": 65536 00:12:36.156 }, 00:12:36.156 { 00:12:36.156 "name": "BaseBdev3", 00:12:36.156 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:36.156 "is_configured": true, 00:12:36.156 "data_offset": 0, 00:12:36.156 "data_size": 65536 00:12:36.156 }, 00:12:36.156 { 00:12:36.156 "name": "BaseBdev4", 00:12:36.156 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:36.156 "is_configured": true, 00:12:36.156 "data_offset": 0, 00:12:36.156 "data_size": 65536 00:12:36.156 } 00:12:36.156 ] 00:12:36.156 }' 00:12:36.156 16:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.156 16:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.156 [2024-11-28 16:25:27.726767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:36.156 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:12:36.156 Zero copy mechanism will not be used. 00:12:36.156 Running I/O for 60 seconds... 00:12:36.417 16:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.417 16:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.417 16:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.417 [2024-11-28 16:25:28.073893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.417 16:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.417 16:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:36.417 [2024-11-28 16:25:28.146640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:36.417 [2024-11-28 16:25:28.148639] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.677 [2024-11-28 16:25:28.276703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:36.936 [2024-11-28 16:25:28.493725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:36.936 [2024-11-28 16:25:28.494342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.196 169.00 IOPS, 507.00 MiB/s [2024-11-28T16:25:28.967Z] [2024-11-28 16:25:28.832944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:37.196 [2024-11-28 16:25:28.834052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.456 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.457 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.457 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.457 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.457 "name": "raid_bdev1", 00:12:37.457 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:37.457 "strip_size_kb": 0, 00:12:37.457 "state": "online", 00:12:37.457 "raid_level": "raid1", 00:12:37.457 "superblock": false, 00:12:37.457 "num_base_bdevs": 4, 00:12:37.457 "num_base_bdevs_discovered": 4, 00:12:37.457 "num_base_bdevs_operational": 4, 00:12:37.457 "process": { 00:12:37.457 "type": "rebuild", 00:12:37.457 "target": "spare", 00:12:37.457 "progress": { 00:12:37.457 "blocks": 10240, 00:12:37.457 "percent": 15 00:12:37.457 } 00:12:37.457 }, 00:12:37.457 "base_bdevs_list": [ 00:12:37.457 { 00:12:37.457 "name": "spare", 00:12:37.457 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:37.457 "is_configured": true, 00:12:37.457 "data_offset": 0, 00:12:37.457 "data_size": 65536 00:12:37.457 }, 00:12:37.457 { 00:12:37.457 "name": "BaseBdev2", 00:12:37.457 "uuid": 
"e60d4297-dc6d-5c1a-8da7-7f259c1d77bb", 00:12:37.457 "is_configured": true, 00:12:37.457 "data_offset": 0, 00:12:37.457 "data_size": 65536 00:12:37.457 }, 00:12:37.457 { 00:12:37.457 "name": "BaseBdev3", 00:12:37.457 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:37.457 "is_configured": true, 00:12:37.457 "data_offset": 0, 00:12:37.457 "data_size": 65536 00:12:37.457 }, 00:12:37.457 { 00:12:37.457 "name": "BaseBdev4", 00:12:37.457 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:37.457 "is_configured": true, 00:12:37.457 "data_offset": 0, 00:12:37.457 "data_size": 65536 00:12:37.457 } 00:12:37.457 ] 00:12:37.457 }' 00:12:37.457 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.457 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.457 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.718 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.718 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:37.718 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.718 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.718 [2024-11-28 16:25:29.250583] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.718 [2024-11-28 16:25:29.330264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:37.718 [2024-11-28 16:25:29.440157] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.718 [2024-11-28 16:25:29.454908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.718 [2024-11-28 16:25:29.454962] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.718 [2024-11-28 16:25:29.454974] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:37.718 [2024-11-28 16:25:29.477392] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.978 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.978 "name": "raid_bdev1", 00:12:37.978 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:37.978 "strip_size_kb": 0, 00:12:37.978 "state": "online", 00:12:37.978 "raid_level": "raid1", 00:12:37.978 "superblock": false, 00:12:37.978 "num_base_bdevs": 4, 00:12:37.978 "num_base_bdevs_discovered": 3, 00:12:37.979 "num_base_bdevs_operational": 3, 00:12:37.979 "base_bdevs_list": [ 00:12:37.979 { 00:12:37.979 "name": null, 00:12:37.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.979 "is_configured": false, 00:12:37.979 "data_offset": 0, 00:12:37.979 "data_size": 65536 00:12:37.979 }, 00:12:37.979 { 00:12:37.979 "name": "BaseBdev2", 00:12:37.979 "uuid": "e60d4297-dc6d-5c1a-8da7-7f259c1d77bb", 00:12:37.979 "is_configured": true, 00:12:37.979 "data_offset": 0, 00:12:37.979 "data_size": 65536 00:12:37.979 }, 00:12:37.979 { 00:12:37.979 "name": "BaseBdev3", 00:12:37.979 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:37.979 "is_configured": true, 00:12:37.979 "data_offset": 0, 00:12:37.979 "data_size": 65536 00:12:37.979 }, 00:12:37.979 { 00:12:37.979 "name": "BaseBdev4", 00:12:37.979 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:37.979 "is_configured": true, 00:12:37.979 "data_offset": 0, 00:12:37.979 "data_size": 65536 00:12:37.979 } 00:12:37.979 ] 00:12:37.979 }' 00:12:37.979 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.979 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.239 161.50 IOPS, 484.50 MiB/s [2024-11-28T16:25:30.010Z] 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.239 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.239 "name": "raid_bdev1", 00:12:38.239 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:38.239 "strip_size_kb": 0, 00:12:38.239 "state": "online", 00:12:38.239 "raid_level": "raid1", 00:12:38.239 "superblock": false, 00:12:38.239 "num_base_bdevs": 4, 00:12:38.239 "num_base_bdevs_discovered": 3, 00:12:38.239 "num_base_bdevs_operational": 3, 00:12:38.239 "base_bdevs_list": [ 00:12:38.239 { 00:12:38.239 "name": null, 00:12:38.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.239 "is_configured": false, 00:12:38.239 "data_offset": 0, 00:12:38.239 "data_size": 65536 00:12:38.239 }, 00:12:38.239 { 00:12:38.239 "name": "BaseBdev2", 00:12:38.239 "uuid": "e60d4297-dc6d-5c1a-8da7-7f259c1d77bb", 00:12:38.239 "is_configured": true, 00:12:38.239 "data_offset": 0, 00:12:38.239 "data_size": 65536 00:12:38.239 }, 00:12:38.239 { 00:12:38.239 "name": "BaseBdev3", 00:12:38.239 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:38.239 "is_configured": true, 00:12:38.239 "data_offset": 0, 
00:12:38.240 "data_size": 65536 00:12:38.240 }, 00:12:38.240 { 00:12:38.240 "name": "BaseBdev4", 00:12:38.240 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:38.240 "is_configured": true, 00:12:38.240 "data_offset": 0, 00:12:38.240 "data_size": 65536 00:12:38.240 } 00:12:38.240 ] 00:12:38.240 }' 00:12:38.240 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.240 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.240 16:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.500 16:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.500 16:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:38.500 16:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.500 16:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.500 [2024-11-28 16:25:30.050513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.500 16:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.500 16:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:38.500 [2024-11-28 16:25:30.110405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:38.500 [2024-11-28 16:25:30.112412] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.500 [2024-11-28 16:25:30.214539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:38.500 [2024-11-28 16:25:30.214965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:38.760 
[2024-11-28 16:25:30.436010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:38.760 [2024-11-28 16:25:30.436642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.020 166.33 IOPS, 499.00 MiB/s [2024-11-28T16:25:30.791Z] [2024-11-28 16:25:30.777871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:39.284 [2024-11-28 16:25:30.993781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:39.284 [2024-11-28 16:25:30.994437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.565 16:25:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.565 "name": "raid_bdev1", 00:12:39.565 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:39.565 "strip_size_kb": 0, 00:12:39.565 "state": "online", 00:12:39.565 "raid_level": "raid1", 00:12:39.565 "superblock": false, 00:12:39.565 "num_base_bdevs": 4, 00:12:39.565 "num_base_bdevs_discovered": 4, 00:12:39.565 "num_base_bdevs_operational": 4, 00:12:39.565 "process": { 00:12:39.565 "type": "rebuild", 00:12:39.565 "target": "spare", 00:12:39.565 "progress": { 00:12:39.565 "blocks": 10240, 00:12:39.565 "percent": 15 00:12:39.565 } 00:12:39.565 }, 00:12:39.565 "base_bdevs_list": [ 00:12:39.565 { 00:12:39.565 "name": "spare", 00:12:39.565 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:39.565 "is_configured": true, 00:12:39.565 "data_offset": 0, 00:12:39.565 "data_size": 65536 00:12:39.565 }, 00:12:39.565 { 00:12:39.565 "name": "BaseBdev2", 00:12:39.565 "uuid": "e60d4297-dc6d-5c1a-8da7-7f259c1d77bb", 00:12:39.565 "is_configured": true, 00:12:39.565 "data_offset": 0, 00:12:39.565 "data_size": 65536 00:12:39.565 }, 00:12:39.565 { 00:12:39.565 "name": "BaseBdev3", 00:12:39.565 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:39.565 "is_configured": true, 00:12:39.565 "data_offset": 0, 00:12:39.565 "data_size": 65536 00:12:39.565 }, 00:12:39.565 { 00:12:39.565 "name": "BaseBdev4", 00:12:39.565 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:39.565 "is_configured": true, 00:12:39.565 "data_offset": 0, 00:12:39.565 "data_size": 65536 00:12:39.565 } 00:12:39.565 ] 00:12:39.565 }' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.565 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.565 [2024-11-28 16:25:31.252951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.841 [2024-11-28 16:25:31.428342] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:39.841 [2024-11-28 16:25:31.428379] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:39.841 [2024-11-28 16:25:31.430657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.841 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.842 "name": "raid_bdev1", 00:12:39.842 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:39.842 "strip_size_kb": 0, 00:12:39.842 "state": "online", 00:12:39.842 "raid_level": "raid1", 00:12:39.842 "superblock": false, 00:12:39.842 "num_base_bdevs": 4, 00:12:39.842 "num_base_bdevs_discovered": 3, 00:12:39.842 "num_base_bdevs_operational": 3, 00:12:39.842 "process": { 00:12:39.842 "type": "rebuild", 00:12:39.842 "target": "spare", 00:12:39.842 "progress": { 00:12:39.842 "blocks": 14336, 00:12:39.842 "percent": 21 00:12:39.842 } 00:12:39.842 }, 00:12:39.842 "base_bdevs_list": [ 00:12:39.842 { 00:12:39.842 "name": "spare", 00:12:39.842 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:39.842 "is_configured": true, 00:12:39.842 "data_offset": 0, 00:12:39.842 "data_size": 65536 00:12:39.842 }, 00:12:39.842 { 00:12:39.842 "name": null, 00:12:39.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.842 "is_configured": false, 00:12:39.842 "data_offset": 0, 00:12:39.842 "data_size": 65536 00:12:39.842 }, 00:12:39.842 { 00:12:39.842 "name": "BaseBdev3", 
00:12:39.842 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:39.842 "is_configured": true, 00:12:39.842 "data_offset": 0, 00:12:39.842 "data_size": 65536 00:12:39.842 }, 00:12:39.842 { 00:12:39.842 "name": "BaseBdev4", 00:12:39.842 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:39.842 "is_configured": true, 00:12:39.842 "data_offset": 0, 00:12:39.842 "data_size": 65536 00:12:39.842 } 00:12:39.842 ] 00:12:39.842 }' 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.842 [2024-11-28 16:25:31.548298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=390 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.842 16:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.102 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.102 "name": "raid_bdev1", 00:12:40.102 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:40.102 "strip_size_kb": 0, 00:12:40.102 "state": "online", 00:12:40.102 "raid_level": "raid1", 00:12:40.102 "superblock": false, 00:12:40.102 "num_base_bdevs": 4, 00:12:40.102 "num_base_bdevs_discovered": 3, 00:12:40.102 "num_base_bdevs_operational": 3, 00:12:40.102 "process": { 00:12:40.102 "type": "rebuild", 00:12:40.102 "target": "spare", 00:12:40.102 "progress": { 00:12:40.102 "blocks": 16384, 00:12:40.102 "percent": 25 00:12:40.102 } 00:12:40.102 }, 00:12:40.102 "base_bdevs_list": [ 00:12:40.102 { 00:12:40.102 "name": "spare", 00:12:40.102 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:40.102 "is_configured": true, 00:12:40.102 "data_offset": 0, 00:12:40.102 "data_size": 65536 00:12:40.102 }, 00:12:40.102 { 00:12:40.102 "name": null, 00:12:40.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.102 "is_configured": false, 00:12:40.102 "data_offset": 0, 00:12:40.102 "data_size": 65536 00:12:40.102 }, 00:12:40.102 { 00:12:40.102 "name": "BaseBdev3", 00:12:40.102 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:40.102 "is_configured": true, 00:12:40.102 "data_offset": 0, 00:12:40.102 "data_size": 65536 00:12:40.102 }, 00:12:40.102 { 00:12:40.102 "name": "BaseBdev4", 00:12:40.102 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:40.102 "is_configured": true, 00:12:40.102 "data_offset": 0, 00:12:40.102 "data_size": 65536 00:12:40.102 } 00:12:40.102 ] 00:12:40.102 }' 
00:12:40.102 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.102 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.103 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.103 150.50 IOPS, 451.50 MiB/s [2024-11-28T16:25:31.874Z] 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.103 16:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.103 [2024-11-28 16:25:31.803025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:40.362 [2024-11-28 16:25:31.928739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:40.622 [2024-11-28 16:25:32.269248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:40.622 [2024-11-28 16:25:32.270050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:41.190 [2024-11-28 16:25:32.718238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:41.190 [2024-11-28 16:25:32.718586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:41.190 131.20 IOPS, 393.60 MiB/s [2024-11-28T16:25:32.961Z] 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.190 "name": "raid_bdev1", 00:12:41.190 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:41.190 "strip_size_kb": 0, 00:12:41.190 "state": "online", 00:12:41.190 "raid_level": "raid1", 00:12:41.190 "superblock": false, 00:12:41.190 "num_base_bdevs": 4, 00:12:41.190 "num_base_bdevs_discovered": 3, 00:12:41.190 "num_base_bdevs_operational": 3, 00:12:41.190 "process": { 00:12:41.190 "type": "rebuild", 00:12:41.190 "target": "spare", 00:12:41.190 "progress": { 00:12:41.190 "blocks": 32768, 00:12:41.190 "percent": 50 00:12:41.190 } 00:12:41.190 }, 00:12:41.190 "base_bdevs_list": [ 00:12:41.190 { 00:12:41.190 "name": "spare", 00:12:41.190 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:41.190 "is_configured": true, 00:12:41.190 "data_offset": 0, 00:12:41.190 "data_size": 65536 00:12:41.190 }, 00:12:41.190 { 00:12:41.190 "name": null, 00:12:41.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.190 "is_configured": false, 00:12:41.190 "data_offset": 0, 00:12:41.190 "data_size": 65536 00:12:41.190 
}, 00:12:41.190 { 00:12:41.190 "name": "BaseBdev3", 00:12:41.190 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:41.190 "is_configured": true, 00:12:41.190 "data_offset": 0, 00:12:41.190 "data_size": 65536 00:12:41.190 }, 00:12:41.190 { 00:12:41.190 "name": "BaseBdev4", 00:12:41.190 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:41.190 "is_configured": true, 00:12:41.190 "data_offset": 0, 00:12:41.190 "data_size": 65536 00:12:41.190 } 00:12:41.190 ] 00:12:41.190 }' 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.190 16:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.448 [2024-11-28 16:25:33.155814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:41.706 [2024-11-28 16:25:33.377408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:42.224 116.67 IOPS, 350.00 MiB/s [2024-11-28T16:25:33.995Z] 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.224 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.224 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.224 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.225 16:25:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.225 [2024-11-28 16:25:33.917949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.225 "name": "raid_bdev1", 00:12:42.225 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:42.225 "strip_size_kb": 0, 00:12:42.225 "state": "online", 00:12:42.225 "raid_level": "raid1", 00:12:42.225 "superblock": false, 00:12:42.225 "num_base_bdevs": 4, 00:12:42.225 "num_base_bdevs_discovered": 3, 00:12:42.225 "num_base_bdevs_operational": 3, 00:12:42.225 "process": { 00:12:42.225 "type": "rebuild", 00:12:42.225 "target": "spare", 00:12:42.225 "progress": { 00:12:42.225 "blocks": 49152, 00:12:42.225 "percent": 75 00:12:42.225 } 00:12:42.225 }, 00:12:42.225 "base_bdevs_list": [ 00:12:42.225 { 00:12:42.225 "name": "spare", 00:12:42.225 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:42.225 "is_configured": true, 00:12:42.225 "data_offset": 0, 00:12:42.225 "data_size": 65536 00:12:42.225 }, 00:12:42.225 { 00:12:42.225 "name": null, 00:12:42.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.225 "is_configured": false, 00:12:42.225 "data_offset": 0, 00:12:42.225 "data_size": 65536 00:12:42.225 }, 00:12:42.225 { 00:12:42.225 "name": "BaseBdev3", 00:12:42.225 
"uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:42.225 "is_configured": true, 00:12:42.225 "data_offset": 0, 00:12:42.225 "data_size": 65536 00:12:42.225 }, 00:12:42.225 { 00:12:42.225 "name": "BaseBdev4", 00:12:42.225 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:42.225 "is_configured": true, 00:12:42.225 "data_offset": 0, 00:12:42.225 "data_size": 65536 00:12:42.225 } 00:12:42.225 ] 00:12:42.225 }' 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.225 16:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.484 16:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.484 16:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.484 [2024-11-28 16:25:34.246429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:43.054 [2024-11-28 16:25:34.681570] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:43.054 104.71 IOPS, 314.14 MiB/s [2024-11-28T16:25:34.825Z] [2024-11-28 16:25:34.781362] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:43.055 [2024-11-28 16:25:34.782702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.315 "name": "raid_bdev1", 00:12:43.315 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:43.315 "strip_size_kb": 0, 00:12:43.315 "state": "online", 00:12:43.315 "raid_level": "raid1", 00:12:43.315 "superblock": false, 00:12:43.315 "num_base_bdevs": 4, 00:12:43.315 "num_base_bdevs_discovered": 3, 00:12:43.315 "num_base_bdevs_operational": 3, 00:12:43.315 "base_bdevs_list": [ 00:12:43.315 { 00:12:43.315 "name": "spare", 00:12:43.315 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:43.315 "is_configured": true, 00:12:43.315 "data_offset": 0, 00:12:43.315 "data_size": 65536 00:12:43.315 }, 00:12:43.315 { 00:12:43.315 "name": null, 00:12:43.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.315 "is_configured": false, 00:12:43.315 "data_offset": 0, 00:12:43.315 "data_size": 65536 00:12:43.315 }, 00:12:43.315 { 00:12:43.315 "name": "BaseBdev3", 00:12:43.315 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:43.315 "is_configured": true, 00:12:43.315 "data_offset": 0, 00:12:43.315 "data_size": 65536 00:12:43.315 }, 00:12:43.315 { 00:12:43.315 "name": "BaseBdev4", 
00:12:43.315 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:43.315 "is_configured": true, 00:12:43.315 "data_offset": 0, 00:12:43.315 "data_size": 65536 00:12:43.315 } 00:12:43.315 ] 00:12:43.315 }' 00:12:43.315 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.575 "name": "raid_bdev1", 
00:12:43.575 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:43.575 "strip_size_kb": 0, 00:12:43.575 "state": "online", 00:12:43.575 "raid_level": "raid1", 00:12:43.575 "superblock": false, 00:12:43.575 "num_base_bdevs": 4, 00:12:43.575 "num_base_bdevs_discovered": 3, 00:12:43.575 "num_base_bdevs_operational": 3, 00:12:43.575 "base_bdevs_list": [ 00:12:43.575 { 00:12:43.575 "name": "spare", 00:12:43.575 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:43.575 "is_configured": true, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 }, 00:12:43.575 { 00:12:43.575 "name": null, 00:12:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.575 "is_configured": false, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 }, 00:12:43.575 { 00:12:43.575 "name": "BaseBdev3", 00:12:43.575 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:43.575 "is_configured": true, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 }, 00:12:43.575 { 00:12:43.575 "name": "BaseBdev4", 00:12:43.575 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:43.575 "is_configured": true, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 } 00:12:43.575 ] 00:12:43.575 }' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.575 16:25:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.575 "name": "raid_bdev1", 00:12:43.575 "uuid": "2e4a4222-5146-4629-9708-7568ebbcc8d7", 00:12:43.575 "strip_size_kb": 0, 00:12:43.575 "state": "online", 00:12:43.575 "raid_level": "raid1", 00:12:43.575 "superblock": false, 00:12:43.575 "num_base_bdevs": 4, 00:12:43.575 "num_base_bdevs_discovered": 3, 00:12:43.575 "num_base_bdevs_operational": 3, 00:12:43.575 "base_bdevs_list": [ 00:12:43.575 { 00:12:43.575 "name": "spare", 00:12:43.575 "uuid": "86623e98-dad3-52bb-a25b-e7cbed747183", 00:12:43.575 
"is_configured": true, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 }, 00:12:43.575 { 00:12:43.575 "name": null, 00:12:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.575 "is_configured": false, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 }, 00:12:43.575 { 00:12:43.575 "name": "BaseBdev3", 00:12:43.575 "uuid": "924520e9-0a51-5e9b-9788-cebe05dd8030", 00:12:43.575 "is_configured": true, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 }, 00:12:43.575 { 00:12:43.575 "name": "BaseBdev4", 00:12:43.575 "uuid": "6cfc04e1-988a-5e6e-ac28-85ad6754efc6", 00:12:43.575 "is_configured": true, 00:12:43.575 "data_offset": 0, 00:12:43.575 "data_size": 65536 00:12:43.575 } 00:12:43.575 ] 00:12:43.575 }' 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.575 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.145 96.50 IOPS, 289.50 MiB/s [2024-11-28T16:25:35.916Z] 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.145 [2024-11-28 16:25:35.753507] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.145 [2024-11-28 16:25:35.753550] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.145 00:12:44.145 Latency(us) 00:12:44.145 [2024-11-28T16:25:35.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.145 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:44.145 raid_bdev1 : 8.14 95.33 285.99 0.00 0.00 14573.62 293.34 110352.32 00:12:44.145 [2024-11-28T16:25:35.916Z] 
=================================================================================================================== 00:12:44.145 [2024-11-28T16:25:35.916Z] Total : 95.33 285.99 0.00 0.00 14573.62 293.34 110352.32 00:12:44.145 [2024-11-28 16:25:35.856320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.145 [2024-11-28 16:25:35.856363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.145 [2024-11-28 16:25:35.856462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.145 [2024-11-28 16:25:35.856482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:44.145 { 00:12:44.145 "results": [ 00:12:44.145 { 00:12:44.145 "job": "raid_bdev1", 00:12:44.145 "core_mask": "0x1", 00:12:44.145 "workload": "randrw", 00:12:44.145 "percentage": 50, 00:12:44.145 "status": "finished", 00:12:44.145 "queue_depth": 2, 00:12:44.145 "io_size": 3145728, 00:12:44.145 "runtime": 8.140216, 00:12:44.145 "iops": 95.32916571255603, 00:12:44.145 "mibps": 285.98749713766813, 00:12:44.145 "io_failed": 0, 00:12:44.145 "io_timeout": 0, 00:12:44.145 "avg_latency_us": 14573.62258137127, 00:12:44.145 "min_latency_us": 293.3379912663755, 00:12:44.145 "max_latency_us": 110352.32139737991 00:12:44.145 } 00:12:44.145 ], 00:12:44.145 "core_count": 1 00:12:44.145 } 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.145 16:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:44.405 /dev/nbd0 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( 
i = 1 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.405 1+0 records in 00:12:44.405 1+0 records out 00:12:44.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388875 s, 10.5 MB/s 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:44.405 16:25:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.405 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:44.665 /dev/nbd1 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:44.665 16:25:36 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.665 1+0 records in 00:12:44.665 1+0 records out 00:12:44.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386412 s, 10.6 MB/s 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.665 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.666 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:44.666 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.666 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.666 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:44.925 
16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.925 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:45.185 /dev/nbd1 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.185 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.186 1+0 records in 00:12:45.186 1+0 records out 00:12:45.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310116 s, 13.2 
MB/s 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:45.186 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.446 16:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.446 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:45.705 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.706 16:25:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89356 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89356 ']' 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89356 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89356 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.706 killing process with pid 89356 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89356' 00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89356 00:12:45.706 Received shutdown signal, test time was about 9.718709 seconds 00:12:45.706 00:12:45.706 Latency(us) 00:12:45.706 [2024-11-28T16:25:37.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.706 [2024-11-28T16:25:37.477Z] =================================================================================================================== 00:12:45.706 [2024-11-28T16:25:37.477Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:45.706 [2024-11-28 16:25:37.428715] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:12:45.706 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89356 00:12:45.966 [2024-11-28 16:25:37.475522] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.966 16:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:45.966 00:12:45.966 real 0m11.715s 00:12:45.966 user 0m15.122s 00:12:45.966 sys 0m1.722s 00:12:45.966 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.966 16:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.966 ************************************ 00:12:45.966 END TEST raid_rebuild_test_io 00:12:45.966 ************************************ 00:12:46.226 16:25:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:46.226 16:25:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:46.226 16:25:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.226 16:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.226 ************************************ 00:12:46.226 START TEST raid_rebuild_test_sb_io 00:12:46.226 ************************************ 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:46.226 16:25:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89754 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89754 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89754 ']' 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.226 16:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.226 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:12:46.226 Zero copy mechanism will not be used. 00:12:46.226 [2024-11-28 16:25:37.857859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:46.226 [2024-11-28 16:25:37.857986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89754 ] 00:12:46.487 [2024-11-28 16:25:38.015245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.487 [2024-11-28 16:25:38.060306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.487 [2024-11-28 16:25:38.103542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.487 [2024-11-28 16:25:38.103599] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 BaseBdev1_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 [2024-11-28 16:25:38.710268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.057 [2024-11-28 16:25:38.710336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.057 [2024-11-28 16:25:38.710361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:47.057 [2024-11-28 16:25:38.710374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.057 [2024-11-28 16:25:38.712433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.057 [2024-11-28 16:25:38.712467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.057 BaseBdev1 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 BaseBdev2_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 [2024-11-28 16:25:38.754689] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:47.057 [2024-11-28 16:25:38.754809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.057 [2024-11-28 16:25:38.754904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:47.057 [2024-11-28 16:25:38.754932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.057 [2024-11-28 16:25:38.759171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.057 [2024-11-28 16:25:38.759232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.057 BaseBdev2 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 BaseBdev3_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 [2024-11-28 16:25:38.785234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:47.057 [2024-11-28 16:25:38.785279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:47.057 [2024-11-28 16:25:38.785303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:47.057 [2024-11-28 16:25:38.785311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.057 [2024-11-28 16:25:38.787319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.057 [2024-11-28 16:25:38.787351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:47.057 BaseBdev3 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 BaseBdev4_malloc 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.057 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.057 [2024-11-28 16:25:38.813879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:47.057 [2024-11-28 16:25:38.813940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.057 [2024-11-28 16:25:38.813963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:47.058 
[2024-11-28 16:25:38.813971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.058 [2024-11-28 16:25:38.815989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.058 [2024-11-28 16:25:38.816066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:47.058 BaseBdev4 00:12:47.058 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.058 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:47.058 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.058 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 spare_malloc 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 spare_delay 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 [2024-11-28 16:25:38.854451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:47.318 [2024-11-28 16:25:38.854500] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.318 [2024-11-28 16:25:38.854535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:47.318 [2024-11-28 16:25:38.854543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.318 [2024-11-28 16:25:38.856580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.318 [2024-11-28 16:25:38.856626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:47.318 spare 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 [2024-11-28 16:25:38.866514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.318 [2024-11-28 16:25:38.868334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.318 [2024-11-28 16:25:38.868403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.318 [2024-11-28 16:25:38.868444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.318 [2024-11-28 16:25:38.868600] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:47.318 [2024-11-28 16:25:38.868610] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.318 [2024-11-28 16:25:38.868867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:47.318 [2024-11-28 16:25:38.869013] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:47.318 [2024-11-28 16:25:38.869026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:47.318 [2024-11-28 16:25:38.869149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.318 "name": "raid_bdev1", 00:12:47.318 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:47.318 "strip_size_kb": 0, 00:12:47.318 "state": "online", 00:12:47.318 "raid_level": "raid1", 00:12:47.318 "superblock": true, 00:12:47.318 "num_base_bdevs": 4, 00:12:47.318 "num_base_bdevs_discovered": 4, 00:12:47.318 "num_base_bdevs_operational": 4, 00:12:47.318 "base_bdevs_list": [ 00:12:47.318 { 00:12:47.318 "name": "BaseBdev1", 00:12:47.318 "uuid": "93296968-c0ec-52e0-a723-d8297e2d8ae3", 00:12:47.318 "is_configured": true, 00:12:47.318 "data_offset": 2048, 00:12:47.318 "data_size": 63488 00:12:47.318 }, 00:12:47.318 { 00:12:47.318 "name": "BaseBdev2", 00:12:47.318 "uuid": "0c4d1a24-806b-598f-a431-866d28e7be1a", 00:12:47.318 "is_configured": true, 00:12:47.318 "data_offset": 2048, 00:12:47.318 "data_size": 63488 00:12:47.318 }, 00:12:47.318 { 00:12:47.318 "name": "BaseBdev3", 00:12:47.318 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:47.318 "is_configured": true, 00:12:47.318 "data_offset": 2048, 00:12:47.318 "data_size": 63488 00:12:47.318 }, 00:12:47.318 { 00:12:47.318 "name": "BaseBdev4", 00:12:47.318 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:47.318 "is_configured": true, 00:12:47.318 "data_offset": 2048, 00:12:47.318 "data_size": 63488 00:12:47.318 } 00:12:47.318 ] 00:12:47.318 }' 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.318 16:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.579 [2024-11-28 16:25:39.286082] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.579 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.839 [2024-11-28 16:25:39.369594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.839 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.839 "name": "raid_bdev1", 00:12:47.839 "uuid": 
"37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:47.839 "strip_size_kb": 0, 00:12:47.839 "state": "online", 00:12:47.839 "raid_level": "raid1", 00:12:47.839 "superblock": true, 00:12:47.839 "num_base_bdevs": 4, 00:12:47.839 "num_base_bdevs_discovered": 3, 00:12:47.839 "num_base_bdevs_operational": 3, 00:12:47.839 "base_bdevs_list": [ 00:12:47.839 { 00:12:47.839 "name": null, 00:12:47.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.839 "is_configured": false, 00:12:47.839 "data_offset": 0, 00:12:47.839 "data_size": 63488 00:12:47.839 }, 00:12:47.839 { 00:12:47.839 "name": "BaseBdev2", 00:12:47.839 "uuid": "0c4d1a24-806b-598f-a431-866d28e7be1a", 00:12:47.839 "is_configured": true, 00:12:47.839 "data_offset": 2048, 00:12:47.839 "data_size": 63488 00:12:47.839 }, 00:12:47.839 { 00:12:47.839 "name": "BaseBdev3", 00:12:47.840 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:47.840 "is_configured": true, 00:12:47.840 "data_offset": 2048, 00:12:47.840 "data_size": 63488 00:12:47.840 }, 00:12:47.840 { 00:12:47.840 "name": "BaseBdev4", 00:12:47.840 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:47.840 "is_configured": true, 00:12:47.840 "data_offset": 2048, 00:12:47.840 "data_size": 63488 00:12:47.840 } 00:12:47.840 ] 00:12:47.840 }' 00:12:47.840 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.840 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.840 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.840 Zero copy mechanism will not be used. 00:12:47.840 Running I/O for 60 seconds... 
00:12:47.840 [2024-11-28 16:25:39.463460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:48.100 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.100 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.100 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.100 [2024-11-28 16:25:39.784488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.100 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.100 16:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:48.100 [2024-11-28 16:25:39.833044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:48.100 [2024-11-28 16:25:39.835068] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.359 [2024-11-28 16:25:39.956107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:48.619 [2024-11-28 16:25:40.173204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:48.619 [2024-11-28 16:25:40.173495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:49.139 202.00 IOPS, 606.00 MiB/s [2024-11-28T16:25:40.910Z] 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.139 16:25:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.139 "name": "raid_bdev1", 00:12:49.139 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:49.139 "strip_size_kb": 0, 00:12:49.139 "state": "online", 00:12:49.139 "raid_level": "raid1", 00:12:49.139 "superblock": true, 00:12:49.139 "num_base_bdevs": 4, 00:12:49.139 "num_base_bdevs_discovered": 4, 00:12:49.139 "num_base_bdevs_operational": 4, 00:12:49.139 "process": { 00:12:49.139 "type": "rebuild", 00:12:49.139 "target": "spare", 00:12:49.139 "progress": { 00:12:49.139 "blocks": 12288, 00:12:49.139 "percent": 19 00:12:49.139 } 00:12:49.139 }, 00:12:49.139 "base_bdevs_list": [ 00:12:49.139 { 00:12:49.139 "name": "spare", 00:12:49.139 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:49.139 "is_configured": true, 00:12:49.139 "data_offset": 2048, 00:12:49.139 "data_size": 63488 00:12:49.139 }, 00:12:49.139 { 00:12:49.139 "name": "BaseBdev2", 00:12:49.139 "uuid": "0c4d1a24-806b-598f-a431-866d28e7be1a", 00:12:49.139 "is_configured": true, 00:12:49.139 "data_offset": 2048, 00:12:49.139 "data_size": 63488 00:12:49.139 }, 00:12:49.139 { 00:12:49.139 "name": "BaseBdev3", 00:12:49.139 "uuid": 
"f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:49.139 "is_configured": true, 00:12:49.139 "data_offset": 2048, 00:12:49.139 "data_size": 63488 00:12:49.139 }, 00:12:49.139 { 00:12:49.139 "name": "BaseBdev4", 00:12:49.139 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:49.139 "is_configured": true, 00:12:49.139 "data_offset": 2048, 00:12:49.139 "data_size": 63488 00:12:49.139 } 00:12:49.139 ] 00:12:49.139 }' 00:12:49.139 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.399 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.399 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.399 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.399 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:49.399 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.399 16:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.399 [2024-11-28 16:25:40.972377] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.399 [2024-11-28 16:25:41.072101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:49.399 [2024-11-28 16:25:41.072343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:49.658 [2024-11-28 16:25:41.179705] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:49.658 [2024-11-28 16:25:41.182780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.658 [2024-11-28 16:25:41.182814] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.658 [2024-11-28 16:25:41.182828] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:49.658 [2024-11-28 16:25:41.205104] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:49.658 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.658 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:49.658 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.658 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.659 16:25:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.659 "name": "raid_bdev1", 00:12:49.659 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:49.659 "strip_size_kb": 0, 00:12:49.659 "state": "online", 00:12:49.659 "raid_level": "raid1", 00:12:49.659 "superblock": true, 00:12:49.659 "num_base_bdevs": 4, 00:12:49.659 "num_base_bdevs_discovered": 3, 00:12:49.659 "num_base_bdevs_operational": 3, 00:12:49.659 "base_bdevs_list": [ 00:12:49.659 { 00:12:49.659 "name": null, 00:12:49.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.659 "is_configured": false, 00:12:49.659 "data_offset": 0, 00:12:49.659 "data_size": 63488 00:12:49.659 }, 00:12:49.659 { 00:12:49.659 "name": "BaseBdev2", 00:12:49.659 "uuid": "0c4d1a24-806b-598f-a431-866d28e7be1a", 00:12:49.659 "is_configured": true, 00:12:49.659 "data_offset": 2048, 00:12:49.659 "data_size": 63488 00:12:49.659 }, 00:12:49.659 { 00:12:49.659 "name": "BaseBdev3", 00:12:49.659 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:49.659 "is_configured": true, 00:12:49.659 "data_offset": 2048, 00:12:49.659 "data_size": 63488 00:12:49.659 }, 00:12:49.659 { 00:12:49.659 "name": "BaseBdev4", 00:12:49.659 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:49.659 "is_configured": true, 00:12:49.659 "data_offset": 2048, 00:12:49.659 "data_size": 63488 00:12:49.659 } 00:12:49.659 ] 00:12:49.659 }' 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.659 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.918 184.50 IOPS, 553.50 MiB/s [2024-11-28T16:25:41.689Z] 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.918 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.919 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.919 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.919 "name": "raid_bdev1", 00:12:49.919 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:49.919 "strip_size_kb": 0, 00:12:49.919 "state": "online", 00:12:49.919 "raid_level": "raid1", 00:12:49.919 "superblock": true, 00:12:49.919 "num_base_bdevs": 4, 00:12:49.919 "num_base_bdevs_discovered": 3, 00:12:49.919 "num_base_bdevs_operational": 3, 00:12:49.919 "base_bdevs_list": [ 00:12:49.919 { 00:12:49.919 "name": null, 00:12:49.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.919 "is_configured": false, 00:12:49.919 "data_offset": 0, 00:12:49.919 "data_size": 63488 00:12:49.919 }, 00:12:49.919 { 00:12:49.919 "name": "BaseBdev2", 00:12:49.919 "uuid": "0c4d1a24-806b-598f-a431-866d28e7be1a", 00:12:49.919 "is_configured": true, 00:12:49.919 "data_offset": 2048, 00:12:49.919 "data_size": 63488 00:12:49.919 }, 00:12:49.919 { 00:12:49.919 "name": "BaseBdev3", 
00:12:49.919 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:49.919 "is_configured": true, 00:12:49.919 "data_offset": 2048, 00:12:49.919 "data_size": 63488 00:12:49.919 }, 00:12:49.919 { 00:12:49.919 "name": "BaseBdev4", 00:12:49.919 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:49.919 "is_configured": true, 00:12:49.919 "data_offset": 2048, 00:12:49.919 "data_size": 63488 00:12:49.919 } 00:12:49.919 ] 00:12:49.919 }' 00:12:49.919 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.179 [2024-11-28 16:25:41.781904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.179 16:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:50.179 [2024-11-28 16:25:41.831475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:50.179 [2024-11-28 16:25:41.833504] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.439 [2024-11-28 16:25:41.949101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:50.439 
[2024-11-28 16:25:41.950417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:50.439 [2024-11-28 16:25:42.192781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:50.439 [2024-11-28 16:25:42.193547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.008 171.67 IOPS, 515.00 MiB/s [2024-11-28T16:25:42.779Z] [2024-11-28 16:25:42.519619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.008 [2024-11-28 16:25:42.520800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.008 [2024-11-28 16:25:42.737518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.008 [2024-11-28 16:25:42.737934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.268 "name": "raid_bdev1", 00:12:51.268 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:51.268 "strip_size_kb": 0, 00:12:51.268 "state": "online", 00:12:51.268 "raid_level": "raid1", 00:12:51.268 "superblock": true, 00:12:51.268 "num_base_bdevs": 4, 00:12:51.268 "num_base_bdevs_discovered": 4, 00:12:51.268 "num_base_bdevs_operational": 4, 00:12:51.268 "process": { 00:12:51.268 "type": "rebuild", 00:12:51.268 "target": "spare", 00:12:51.268 "progress": { 00:12:51.268 "blocks": 10240, 00:12:51.268 "percent": 16 00:12:51.268 } 00:12:51.268 }, 00:12:51.268 "base_bdevs_list": [ 00:12:51.268 { 00:12:51.268 "name": "spare", 00:12:51.268 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:51.268 "is_configured": true, 00:12:51.268 "data_offset": 2048, 00:12:51.268 "data_size": 63488 00:12:51.268 }, 00:12:51.268 { 00:12:51.268 "name": "BaseBdev2", 00:12:51.268 "uuid": "0c4d1a24-806b-598f-a431-866d28e7be1a", 00:12:51.268 "is_configured": true, 00:12:51.268 "data_offset": 2048, 00:12:51.268 "data_size": 63488 00:12:51.268 }, 00:12:51.268 { 00:12:51.268 "name": "BaseBdev3", 00:12:51.268 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:51.268 "is_configured": true, 00:12:51.268 "data_offset": 2048, 00:12:51.268 "data_size": 63488 00:12:51.268 }, 00:12:51.268 { 00:12:51.268 "name": "BaseBdev4", 00:12:51.268 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:51.268 "is_configured": true, 00:12:51.268 "data_offset": 2048, 00:12:51.268 "data_size": 63488 00:12:51.268 } 00:12:51.268 ] 00:12:51.268 }' 00:12:51.268 16:25:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:51.268 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.268 16:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 [2024-11-28 16:25:42.920298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.528 [2024-11-28 16:25:43.157264] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:51.528 [2024-11-28 16:25:43.157380] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 
-- # base_bdevs[1]= 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.528 "name": "raid_bdev1", 00:12:51.528 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:51.528 "strip_size_kb": 0, 00:12:51.528 "state": "online", 00:12:51.528 "raid_level": "raid1", 00:12:51.528 "superblock": true, 00:12:51.528 "num_base_bdevs": 4, 00:12:51.528 "num_base_bdevs_discovered": 3, 00:12:51.528 "num_base_bdevs_operational": 3, 00:12:51.528 "process": { 00:12:51.528 "type": "rebuild", 00:12:51.528 "target": "spare", 00:12:51.528 "progress": { 00:12:51.528 "blocks": 12288, 00:12:51.528 "percent": 19 00:12:51.528 } 00:12:51.528 }, 00:12:51.528 "base_bdevs_list": [ 00:12:51.528 { 
00:12:51.528 "name": "spare", 00:12:51.528 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:51.528 "is_configured": true, 00:12:51.528 "data_offset": 2048, 00:12:51.528 "data_size": 63488 00:12:51.528 }, 00:12:51.528 { 00:12:51.528 "name": null, 00:12:51.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.528 "is_configured": false, 00:12:51.528 "data_offset": 0, 00:12:51.528 "data_size": 63488 00:12:51.528 }, 00:12:51.528 { 00:12:51.528 "name": "BaseBdev3", 00:12:51.528 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:51.528 "is_configured": true, 00:12:51.528 "data_offset": 2048, 00:12:51.528 "data_size": 63488 00:12:51.528 }, 00:12:51.528 { 00:12:51.528 "name": "BaseBdev4", 00:12:51.528 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:51.528 "is_configured": true, 00:12:51.528 "data_offset": 2048, 00:12:51.528 "data_size": 63488 00:12:51.528 } 00:12:51.528 ] 00:12:51.528 }' 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.528 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.528 [2024-11-28 16:25:43.275257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:51.528 [2024-11-28 16:25:43.275843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.788 "name": "raid_bdev1", 00:12:51.788 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:51.788 "strip_size_kb": 0, 00:12:51.788 "state": "online", 00:12:51.788 "raid_level": "raid1", 00:12:51.788 "superblock": true, 00:12:51.788 "num_base_bdevs": 4, 00:12:51.788 "num_base_bdevs_discovered": 3, 00:12:51.788 "num_base_bdevs_operational": 3, 00:12:51.788 "process": { 00:12:51.788 "type": "rebuild", 00:12:51.788 "target": "spare", 00:12:51.788 "progress": { 00:12:51.788 "blocks": 14336, 00:12:51.788 "percent": 22 00:12:51.788 } 00:12:51.788 }, 00:12:51.788 "base_bdevs_list": [ 00:12:51.788 { 00:12:51.788 "name": "spare", 00:12:51.788 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:51.788 "is_configured": true, 00:12:51.788 "data_offset": 2048, 00:12:51.788 "data_size": 63488 
00:12:51.788 }, 00:12:51.788 { 00:12:51.788 "name": null, 00:12:51.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.788 "is_configured": false, 00:12:51.788 "data_offset": 0, 00:12:51.788 "data_size": 63488 00:12:51.788 }, 00:12:51.788 { 00:12:51.788 "name": "BaseBdev3", 00:12:51.788 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:51.788 "is_configured": true, 00:12:51.788 "data_offset": 2048, 00:12:51.788 "data_size": 63488 00:12:51.788 }, 00:12:51.788 { 00:12:51.788 "name": "BaseBdev4", 00:12:51.788 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:51.788 "is_configured": true, 00:12:51.788 "data_offset": 2048, 00:12:51.788 "data_size": 63488 00:12:51.788 } 00:12:51.788 ] 00:12:51.788 }' 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.788 [2024-11-28 16:25:43.412566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.788 141.75 IOPS, 425.25 MiB/s [2024-11-28T16:25:43.559Z] 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.788 16:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.048 [2024-11-28 16:25:43.759812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:52.618 [2024-11-28 16:25:44.321355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:52.878 [2024-11-28 16:25:44.445944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 
00:12:52.878 126.20 IOPS, 378.60 MiB/s [2024-11-28T16:25:44.649Z] 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.879 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.879 "name": "raid_bdev1", 00:12:52.879 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:52.879 "strip_size_kb": 0, 00:12:52.879 "state": "online", 00:12:52.879 "raid_level": "raid1", 00:12:52.879 "superblock": true, 00:12:52.879 "num_base_bdevs": 4, 00:12:52.879 "num_base_bdevs_discovered": 3, 00:12:52.879 "num_base_bdevs_operational": 3, 00:12:52.879 "process": { 00:12:52.879 "type": "rebuild", 00:12:52.879 "target": "spare", 00:12:52.879 "progress": { 00:12:52.879 "blocks": 34816, 00:12:52.879 "percent": 54 00:12:52.879 } 00:12:52.879 }, 00:12:52.879 
"base_bdevs_list": [ 00:12:52.879 { 00:12:52.879 "name": "spare", 00:12:52.879 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "name": null, 00:12:52.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.879 "is_configured": false, 00:12:52.879 "data_offset": 0, 00:12:52.879 "data_size": 63488 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "name": "BaseBdev3", 00:12:52.879 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 }, 00:12:52.879 { 00:12:52.879 "name": "BaseBdev4", 00:12:52.879 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:52.879 "is_configured": true, 00:12:52.879 "data_offset": 2048, 00:12:52.879 "data_size": 63488 00:12:52.879 } 00:12:52.879 ] 00:12:52.879 }' 00:12:52.879 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.879 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.879 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.879 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.879 16:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.139 [2024-11-28 16:25:44.887898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:53.713 [2024-11-28 16:25:45.218750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:53.973 110.67 IOPS, 332.00 MiB/s [2024-11-28T16:25:45.744Z] [2024-11-28 16:25:45.533169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:53.973 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.973 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.973 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.973 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.973 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.974 "name": "raid_bdev1", 00:12:53.974 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:53.974 "strip_size_kb": 0, 00:12:53.974 "state": "online", 00:12:53.974 "raid_level": "raid1", 00:12:53.974 "superblock": true, 00:12:53.974 "num_base_bdevs": 4, 00:12:53.974 "num_base_bdevs_discovered": 3, 00:12:53.974 "num_base_bdevs_operational": 3, 00:12:53.974 "process": { 00:12:53.974 "type": "rebuild", 00:12:53.974 "target": "spare", 00:12:53.974 "progress": { 00:12:53.974 "blocks": 51200, 00:12:53.974 "percent": 80 00:12:53.974 } 00:12:53.974 }, 00:12:53.974 
"base_bdevs_list": [ 00:12:53.974 { 00:12:53.974 "name": "spare", 00:12:53.974 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:53.974 "is_configured": true, 00:12:53.974 "data_offset": 2048, 00:12:53.974 "data_size": 63488 00:12:53.974 }, 00:12:53.974 { 00:12:53.974 "name": null, 00:12:53.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.974 "is_configured": false, 00:12:53.974 "data_offset": 0, 00:12:53.974 "data_size": 63488 00:12:53.974 }, 00:12:53.974 { 00:12:53.974 "name": "BaseBdev3", 00:12:53.974 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:53.974 "is_configured": true, 00:12:53.974 "data_offset": 2048, 00:12:53.974 "data_size": 63488 00:12:53.974 }, 00:12:53.974 { 00:12:53.974 "name": "BaseBdev4", 00:12:53.974 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:53.974 "is_configured": true, 00:12:53.974 "data_offset": 2048, 00:12:53.974 "data_size": 63488 00:12:53.974 } 00:12:53.974 ] 00:12:53.974 }' 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.974 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.234 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.234 16:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.234 [2024-11-28 16:25:45.974274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:54.494 [2024-11-28 16:25:46.196247] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:54.754 [2024-11-28 16:25:46.296101] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:54.754 [2024-11-28 16:25:46.298306] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.026 99.14 IOPS, 297.43 MiB/s [2024-11-28T16:25:46.797Z] 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.026 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.302 "name": "raid_bdev1", 00:12:55.302 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:55.302 "strip_size_kb": 0, 00:12:55.302 "state": "online", 00:12:55.302 "raid_level": "raid1", 00:12:55.302 "superblock": true, 00:12:55.302 "num_base_bdevs": 4, 00:12:55.302 "num_base_bdevs_discovered": 3, 00:12:55.302 "num_base_bdevs_operational": 3, 00:12:55.302 "base_bdevs_list": [ 00:12:55.302 { 00:12:55.302 "name": "spare", 00:12:55.302 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:55.302 
"is_configured": true, 00:12:55.302 "data_offset": 2048, 00:12:55.302 "data_size": 63488 00:12:55.302 }, 00:12:55.302 { 00:12:55.302 "name": null, 00:12:55.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.302 "is_configured": false, 00:12:55.302 "data_offset": 0, 00:12:55.302 "data_size": 63488 00:12:55.302 }, 00:12:55.302 { 00:12:55.302 "name": "BaseBdev3", 00:12:55.302 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:55.302 "is_configured": true, 00:12:55.302 "data_offset": 2048, 00:12:55.302 "data_size": 63488 00:12:55.302 }, 00:12:55.302 { 00:12:55.302 "name": "BaseBdev4", 00:12:55.302 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:55.302 "is_configured": true, 00:12:55.302 "data_offset": 2048, 00:12:55.302 "data_size": 63488 00:12:55.302 } 00:12:55.302 ] 00:12:55.302 }' 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.302 "name": "raid_bdev1", 00:12:55.302 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:55.302 "strip_size_kb": 0, 00:12:55.302 "state": "online", 00:12:55.302 "raid_level": "raid1", 00:12:55.302 "superblock": true, 00:12:55.302 "num_base_bdevs": 4, 00:12:55.302 "num_base_bdevs_discovered": 3, 00:12:55.302 "num_base_bdevs_operational": 3, 00:12:55.302 "base_bdevs_list": [ 00:12:55.302 { 00:12:55.302 "name": "spare", 00:12:55.302 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:55.302 "is_configured": true, 00:12:55.302 "data_offset": 2048, 00:12:55.302 "data_size": 63488 00:12:55.302 }, 00:12:55.302 { 00:12:55.302 "name": null, 00:12:55.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.302 "is_configured": false, 00:12:55.302 "data_offset": 0, 00:12:55.302 "data_size": 63488 00:12:55.302 }, 00:12:55.302 { 00:12:55.302 "name": "BaseBdev3", 00:12:55.302 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:55.302 "is_configured": true, 00:12:55.302 "data_offset": 2048, 00:12:55.302 "data_size": 63488 00:12:55.302 }, 00:12:55.302 { 00:12:55.302 "name": "BaseBdev4", 00:12:55.302 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:55.302 "is_configured": true, 00:12:55.302 "data_offset": 2048, 00:12:55.302 "data_size": 63488 00:12:55.302 } 00:12:55.302 ] 00:12:55.302 }' 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.302 16:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.302 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.302 
16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.561 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.562 "name": "raid_bdev1", 00:12:55.562 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:55.562 "strip_size_kb": 0, 00:12:55.562 "state": "online", 00:12:55.562 "raid_level": "raid1", 00:12:55.562 "superblock": true, 00:12:55.562 "num_base_bdevs": 4, 00:12:55.562 "num_base_bdevs_discovered": 3, 00:12:55.562 "num_base_bdevs_operational": 3, 00:12:55.562 "base_bdevs_list": [ 00:12:55.562 { 00:12:55.562 "name": "spare", 00:12:55.562 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:55.562 "is_configured": true, 00:12:55.562 "data_offset": 2048, 00:12:55.562 "data_size": 63488 00:12:55.562 }, 00:12:55.562 { 00:12:55.562 "name": null, 00:12:55.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.562 "is_configured": false, 00:12:55.562 "data_offset": 0, 00:12:55.562 "data_size": 63488 00:12:55.562 }, 00:12:55.562 { 00:12:55.562 "name": "BaseBdev3", 00:12:55.562 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:55.562 "is_configured": true, 00:12:55.562 "data_offset": 2048, 00:12:55.562 "data_size": 63488 00:12:55.562 }, 00:12:55.562 { 00:12:55.562 "name": "BaseBdev4", 00:12:55.562 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:55.562 "is_configured": true, 00:12:55.562 "data_offset": 2048, 00:12:55.562 "data_size": 63488 00:12:55.562 } 00:12:55.562 ] 00:12:55.562 }' 00:12:55.562 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.562 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.821 90.62 IOPS, 271.88 MiB/s [2024-11-28T16:25:47.592Z] 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.821 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:55.821 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.821 [2024-11-28 16:25:47.474386] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.821 [2024-11-28 16:25:47.474418] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.821 00:12:55.821 Latency(us) 00:12:55.821 [2024-11-28T16:25:47.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.821 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:55.821 raid_bdev1 : 8.12 89.61 268.82 0.00 0.00 15476.34 280.82 115847.04 00:12:55.821 [2024-11-28T16:25:47.592Z] =================================================================================================================== 00:12:55.821 [2024-11-28T16:25:47.592Z] Total : 89.61 268.82 0.00 0.00 15476.34 280.82 115847.04 00:12:55.821 { 00:12:55.821 "results": [ 00:12:55.821 { 00:12:55.821 "job": "raid_bdev1", 00:12:55.821 "core_mask": "0x1", 00:12:55.821 "workload": "randrw", 00:12:55.821 "percentage": 50, 00:12:55.821 "status": "finished", 00:12:55.821 "queue_depth": 2, 00:12:55.821 "io_size": 3145728, 00:12:55.821 "runtime": 8.124381, 00:12:55.821 "iops": 89.60682666162505, 00:12:55.821 "mibps": 268.82047998487513, 00:12:55.821 "io_failed": 0, 00:12:55.821 "io_timeout": 0, 00:12:55.821 "avg_latency_us": 15476.33571668506, 00:12:55.821 "min_latency_us": 280.8174672489083, 00:12:55.821 "max_latency_us": 115847.04279475982 00:12:55.821 } 00:12:55.821 ], 00:12:55.822 "core_count": 1 00:12:55.822 } 00:12:55.822 [2024-11-28 16:25:47.577378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.822 [2024-11-28 16:25:47.577430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.822 [2024-11-28 16:25:47.577526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:12:55.822 [2024-11-28 16:25:47.577539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:55.822 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.822 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.822 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:55.822 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.822 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.082 
16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:56.082 /dev/nbd0 00:12:56.082 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.343 1+0 records in 00:12:56.343 1+0 records out 00:12:56.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299991 s, 13.7 MB/s 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:56.343 16:25:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.343 16:25:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.343 16:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:56.343 /dev/nbd1 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.343 1+0 records in 00:12:56.343 1+0 records out 00:12:56.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556126 s, 7.4 MB/s 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.343 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:56.603 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:56.604 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.604 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:56.604 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.604 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:56.604 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.604 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:56.864 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.864 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.864 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.864 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.864 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.864 
16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.864 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:56.865 /dev/nbd1 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 
-- # local nbd_name=nbd1 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.865 1+0 records in 00:12:56.865 1+0 records out 00:12:56.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422304 s, 9.7 MB/s 00:12:56.865 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- 
# cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.125 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.385 16:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.385 16:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.385 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.385 [2024-11-28 16:25:49.145744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:57.386 [2024-11-28 16:25:49.145817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.386 [2024-11-28 16:25:49.145851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:57.386 [2024-11-28 16:25:49.145863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.386 [2024-11-28 16:25:49.148441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.386 [2024-11-28 16:25:49.148497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:57.386 [2024-11-28 16:25:49.148590] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:57.386 [2024-11-28 16:25:49.148643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.386 [2024-11-28 16:25:49.148780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.386 [2024-11-28 16:25:49.149002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:57.386 spare 00:12:57.386 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.386 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:57.386 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.386 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 [2024-11-28 
16:25:49.248928] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:57.646 [2024-11-28 16:25:49.248959] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.646 [2024-11-28 16:25:49.249279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:12:57.647 [2024-11-28 16:25:49.249449] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:57.647 [2024-11-28 16:25:49.249466] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:57.647 [2024-11-28 16:25:49.249618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.647 "name": "raid_bdev1", 00:12:57.647 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:57.647 "strip_size_kb": 0, 00:12:57.647 "state": "online", 00:12:57.647 "raid_level": "raid1", 00:12:57.647 "superblock": true, 00:12:57.647 "num_base_bdevs": 4, 00:12:57.647 "num_base_bdevs_discovered": 3, 00:12:57.647 "num_base_bdevs_operational": 3, 00:12:57.647 "base_bdevs_list": [ 00:12:57.647 { 00:12:57.647 "name": "spare", 00:12:57.647 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:57.647 "is_configured": true, 00:12:57.647 "data_offset": 2048, 00:12:57.647 "data_size": 63488 00:12:57.647 }, 00:12:57.647 { 00:12:57.647 "name": null, 00:12:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.647 "is_configured": false, 00:12:57.647 "data_offset": 2048, 00:12:57.647 "data_size": 63488 00:12:57.647 }, 00:12:57.647 { 00:12:57.647 "name": "BaseBdev3", 00:12:57.647 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:57.647 "is_configured": true, 00:12:57.647 "data_offset": 2048, 00:12:57.647 "data_size": 63488 00:12:57.647 }, 00:12:57.647 { 00:12:57.647 "name": "BaseBdev4", 00:12:57.647 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:57.647 "is_configured": true, 00:12:57.647 "data_offset": 2048, 00:12:57.647 "data_size": 63488 00:12:57.647 } 00:12:57.647 ] 00:12:57.647 }' 
00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.647 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.218 "name": "raid_bdev1", 00:12:58.218 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:58.218 "strip_size_kb": 0, 00:12:58.218 "state": "online", 00:12:58.218 "raid_level": "raid1", 00:12:58.218 "superblock": true, 00:12:58.218 "num_base_bdevs": 4, 00:12:58.218 "num_base_bdevs_discovered": 3, 00:12:58.218 "num_base_bdevs_operational": 3, 00:12:58.218 "base_bdevs_list": [ 00:12:58.218 { 00:12:58.218 "name": "spare", 00:12:58.218 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:58.218 "is_configured": true, 00:12:58.218 "data_offset": 
2048, 00:12:58.218 "data_size": 63488 00:12:58.218 }, 00:12:58.218 { 00:12:58.218 "name": null, 00:12:58.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.218 "is_configured": false, 00:12:58.218 "data_offset": 2048, 00:12:58.218 "data_size": 63488 00:12:58.218 }, 00:12:58.218 { 00:12:58.218 "name": "BaseBdev3", 00:12:58.218 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:58.218 "is_configured": true, 00:12:58.218 "data_offset": 2048, 00:12:58.218 "data_size": 63488 00:12:58.218 }, 00:12:58.218 { 00:12:58.218 "name": "BaseBdev4", 00:12:58.218 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:58.218 "is_configured": true, 00:12:58.218 "data_offset": 2048, 00:12:58.218 "data_size": 63488 00:12:58.218 } 00:12:58.218 ] 00:12:58.218 }' 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.218 [2024-11-28 16:25:49.896644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.218 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.219 "name": "raid_bdev1", 00:12:58.219 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:58.219 "strip_size_kb": 0, 00:12:58.219 "state": "online", 00:12:58.219 "raid_level": "raid1", 00:12:58.219 "superblock": true, 00:12:58.219 "num_base_bdevs": 4, 00:12:58.219 "num_base_bdevs_discovered": 2, 00:12:58.219 "num_base_bdevs_operational": 2, 00:12:58.219 "base_bdevs_list": [ 00:12:58.219 { 00:12:58.219 "name": null, 00:12:58.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.219 "is_configured": false, 00:12:58.219 "data_offset": 0, 00:12:58.219 "data_size": 63488 00:12:58.219 }, 00:12:58.219 { 00:12:58.219 "name": null, 00:12:58.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.219 "is_configured": false, 00:12:58.219 "data_offset": 2048, 00:12:58.219 "data_size": 63488 00:12:58.219 }, 00:12:58.219 { 00:12:58.219 "name": "BaseBdev3", 00:12:58.219 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:58.219 "is_configured": true, 00:12:58.219 "data_offset": 2048, 00:12:58.219 "data_size": 63488 00:12:58.219 }, 00:12:58.219 { 00:12:58.219 "name": "BaseBdev4", 00:12:58.219 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:58.219 "is_configured": true, 00:12:58.219 "data_offset": 2048, 00:12:58.219 "data_size": 63488 00:12:58.219 } 00:12:58.219 ] 00:12:58.219 }' 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.219 16:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.788 16:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:58.788 16:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:58.788 16:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.788 [2024-11-28 16:25:50.323961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.788 [2024-11-28 16:25:50.324154] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:58.788 [2024-11-28 16:25:50.324181] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:58.788 [2024-11-28 16:25:50.324220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.788 [2024-11-28 16:25:50.327862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:12:58.788 [2024-11-28 16:25:50.329666] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.788 16:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.788 16:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.725 16:25:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.725 "name": "raid_bdev1", 00:12:59.725 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:59.725 "strip_size_kb": 0, 00:12:59.725 "state": "online", 00:12:59.725 "raid_level": "raid1", 00:12:59.725 "superblock": true, 00:12:59.725 "num_base_bdevs": 4, 00:12:59.725 "num_base_bdevs_discovered": 3, 00:12:59.725 "num_base_bdevs_operational": 3, 00:12:59.725 "process": { 00:12:59.725 "type": "rebuild", 00:12:59.725 "target": "spare", 00:12:59.725 "progress": { 00:12:59.725 "blocks": 20480, 00:12:59.725 "percent": 32 00:12:59.725 } 00:12:59.725 }, 00:12:59.725 "base_bdevs_list": [ 00:12:59.725 { 00:12:59.725 "name": "spare", 00:12:59.725 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": null, 00:12:59.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.725 "is_configured": false, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": "BaseBdev3", 00:12:59.725 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": "BaseBdev4", 00:12:59.725 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 2048, 00:12:59.725 "data_size": 63488 00:12:59.725 } 00:12:59.725 ] 00:12:59.725 }' 00:12:59.725 16:25:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.725 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.985 [2024-11-28 16:25:51.496626] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.985 [2024-11-28 16:25:51.534287] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.985 [2024-11-28 16:25:51.534354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.985 [2024-11-28 16:25:51.534371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.985 [2024-11-28 16:25:51.534379] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.985 "name": "raid_bdev1", 00:12:59.985 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:12:59.985 "strip_size_kb": 0, 00:12:59.985 "state": "online", 00:12:59.985 "raid_level": "raid1", 00:12:59.985 "superblock": true, 00:12:59.985 "num_base_bdevs": 4, 00:12:59.985 "num_base_bdevs_discovered": 2, 00:12:59.985 "num_base_bdevs_operational": 2, 00:12:59.985 "base_bdevs_list": [ 00:12:59.985 { 00:12:59.985 "name": null, 00:12:59.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.985 "is_configured": false, 00:12:59.985 "data_offset": 0, 00:12:59.985 "data_size": 63488 00:12:59.985 }, 00:12:59.985 { 00:12:59.985 "name": null, 00:12:59.985 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:59.985 "is_configured": false, 00:12:59.985 "data_offset": 2048, 00:12:59.985 "data_size": 63488 00:12:59.985 }, 00:12:59.985 { 00:12:59.985 "name": "BaseBdev3", 00:12:59.985 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:12:59.985 "is_configured": true, 00:12:59.985 "data_offset": 2048, 00:12:59.985 "data_size": 63488 00:12:59.985 }, 00:12:59.985 { 00:12:59.985 "name": "BaseBdev4", 00:12:59.985 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:12:59.985 "is_configured": true, 00:12:59.985 "data_offset": 2048, 00:12:59.985 "data_size": 63488 00:12:59.985 } 00:12:59.985 ] 00:12:59.985 }' 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.985 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.245 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.245 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 [2024-11-28 16:25:51.949915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.245 [2024-11-28 16:25:51.949984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.245 [2024-11-28 16:25:51.950011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:00.245 [2024-11-28 16:25:51.950022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.245 [2024-11-28 16:25:51.950485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.245 [2024-11-28 16:25:51.950505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.245 [2024-11-28 16:25:51.950598] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:00.245 [2024-11-28 16:25:51.950613] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:00.245 [2024-11-28 16:25:51.950623] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:00.245 [2024-11-28 16:25:51.950647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.245 [2024-11-28 16:25:51.954421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:00.245 spare 00:13:00.245 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.245 16:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:00.245 [2024-11-28 16:25:51.956292] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.627 16:25:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.627 16:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.627 "name": "raid_bdev1", 00:13:01.627 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:01.627 "strip_size_kb": 0, 00:13:01.627 "state": "online", 00:13:01.627 "raid_level": "raid1", 00:13:01.627 "superblock": true, 00:13:01.627 "num_base_bdevs": 4, 00:13:01.627 "num_base_bdevs_discovered": 3, 00:13:01.627 "num_base_bdevs_operational": 3, 00:13:01.627 "process": { 00:13:01.627 "type": "rebuild", 00:13:01.627 "target": "spare", 00:13:01.627 "progress": { 00:13:01.627 "blocks": 20480, 00:13:01.627 "percent": 32 00:13:01.627 } 00:13:01.627 }, 00:13:01.627 "base_bdevs_list": [ 00:13:01.627 { 00:13:01.627 "name": "spare", 00:13:01.627 "uuid": "7542b234-36dc-551f-a31f-bf761e7a46c0", 00:13:01.627 "is_configured": true, 00:13:01.627 "data_offset": 2048, 00:13:01.627 "data_size": 63488 00:13:01.627 }, 00:13:01.627 { 00:13:01.627 "name": null, 00:13:01.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.627 "is_configured": false, 00:13:01.627 "data_offset": 2048, 00:13:01.627 "data_size": 63488 00:13:01.627 }, 00:13:01.627 { 00:13:01.627 "name": "BaseBdev3", 00:13:01.627 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:01.627 "is_configured": true, 00:13:01.627 "data_offset": 2048, 00:13:01.627 "data_size": 63488 00:13:01.627 }, 00:13:01.627 { 00:13:01.627 "name": "BaseBdev4", 00:13:01.627 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:01.627 "is_configured": true, 00:13:01.627 "data_offset": 2048, 00:13:01.627 "data_size": 63488 00:13:01.627 } 00:13:01.627 ] 00:13:01.627 }' 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.627 [2024-11-28 16:25:53.097030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.627 [2024-11-28 16:25:53.160529] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.627 [2024-11-28 16:25:53.160582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.627 [2024-11-28 16:25:53.160599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.627 [2024-11-28 16:25:53.160606] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.627 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.627 "name": "raid_bdev1", 00:13:01.627 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:01.627 "strip_size_kb": 0, 00:13:01.627 "state": "online", 00:13:01.627 "raid_level": "raid1", 00:13:01.627 "superblock": true, 00:13:01.627 "num_base_bdevs": 4, 00:13:01.627 "num_base_bdevs_discovered": 2, 00:13:01.627 "num_base_bdevs_operational": 2, 00:13:01.627 "base_bdevs_list": [ 00:13:01.627 { 00:13:01.627 "name": null, 00:13:01.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.627 "is_configured": false, 00:13:01.627 "data_offset": 0, 00:13:01.627 "data_size": 63488 00:13:01.627 }, 00:13:01.627 { 00:13:01.627 "name": null, 00:13:01.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.628 "is_configured": false, 00:13:01.628 "data_offset": 2048, 00:13:01.628 "data_size": 63488 00:13:01.628 }, 
00:13:01.628 { 00:13:01.628 "name": "BaseBdev3", 00:13:01.628 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:01.628 "is_configured": true, 00:13:01.628 "data_offset": 2048, 00:13:01.628 "data_size": 63488 00:13:01.628 }, 00:13:01.628 { 00:13:01.628 "name": "BaseBdev4", 00:13:01.628 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:01.628 "is_configured": true, 00:13:01.628 "data_offset": 2048, 00:13:01.628 "data_size": 63488 00:13:01.628 } 00:13:01.628 ] 00:13:01.628 }' 00:13:01.628 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.628 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.887 "name": "raid_bdev1", 00:13:01.887 "uuid": 
"37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:01.887 "strip_size_kb": 0, 00:13:01.887 "state": "online", 00:13:01.887 "raid_level": "raid1", 00:13:01.887 "superblock": true, 00:13:01.887 "num_base_bdevs": 4, 00:13:01.887 "num_base_bdevs_discovered": 2, 00:13:01.887 "num_base_bdevs_operational": 2, 00:13:01.887 "base_bdevs_list": [ 00:13:01.887 { 00:13:01.887 "name": null, 00:13:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.887 "is_configured": false, 00:13:01.887 "data_offset": 0, 00:13:01.887 "data_size": 63488 00:13:01.887 }, 00:13:01.887 { 00:13:01.887 "name": null, 00:13:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.887 "is_configured": false, 00:13:01.887 "data_offset": 2048, 00:13:01.887 "data_size": 63488 00:13:01.887 }, 00:13:01.887 { 00:13:01.887 "name": "BaseBdev3", 00:13:01.887 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:01.887 "is_configured": true, 00:13:01.887 "data_offset": 2048, 00:13:01.887 "data_size": 63488 00:13:01.887 }, 00:13:01.887 { 00:13:01.887 "name": "BaseBdev4", 00:13:01.887 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:01.887 "is_configured": true, 00:13:01.887 "data_offset": 2048, 00:13:01.887 "data_size": 63488 00:13:01.887 } 00:13:01.887 ] 00:13:01.887 }' 00:13:01.887 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.147 16:25:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.147 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.147 [2024-11-28 16:25:53.735882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:02.147 [2024-11-28 16:25:53.735935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.147 [2024-11-28 16:25:53.735956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:02.147 [2024-11-28 16:25:53.735965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.147 [2024-11-28 16:25:53.736417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.147 [2024-11-28 16:25:53.736450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.147 [2024-11-28 16:25:53.736525] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:02.148 [2024-11-28 16:25:53.736548] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:02.148 [2024-11-28 16:25:53.736559] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:02.148 [2024-11-28 16:25:53.736577] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:02.148 BaseBdev1 00:13:02.148 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:02.148 16:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.087 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.088 "name": "raid_bdev1", 00:13:03.088 "uuid": 
"37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:03.088 "strip_size_kb": 0, 00:13:03.088 "state": "online", 00:13:03.088 "raid_level": "raid1", 00:13:03.088 "superblock": true, 00:13:03.088 "num_base_bdevs": 4, 00:13:03.088 "num_base_bdevs_discovered": 2, 00:13:03.088 "num_base_bdevs_operational": 2, 00:13:03.088 "base_bdevs_list": [ 00:13:03.088 { 00:13:03.088 "name": null, 00:13:03.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.088 "is_configured": false, 00:13:03.088 "data_offset": 0, 00:13:03.088 "data_size": 63488 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "name": null, 00:13:03.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.088 "is_configured": false, 00:13:03.088 "data_offset": 2048, 00:13:03.088 "data_size": 63488 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "name": "BaseBdev3", 00:13:03.088 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:03.088 "is_configured": true, 00:13:03.088 "data_offset": 2048, 00:13:03.088 "data_size": 63488 00:13:03.088 }, 00:13:03.088 { 00:13:03.088 "name": "BaseBdev4", 00:13:03.088 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:03.088 "is_configured": true, 00:13:03.088 "data_offset": 2048, 00:13:03.088 "data_size": 63488 00:13:03.088 } 00:13:03.088 ] 00:13:03.088 }' 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.088 16:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.657 "name": "raid_bdev1", 00:13:03.657 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:03.657 "strip_size_kb": 0, 00:13:03.657 "state": "online", 00:13:03.657 "raid_level": "raid1", 00:13:03.657 "superblock": true, 00:13:03.657 "num_base_bdevs": 4, 00:13:03.657 "num_base_bdevs_discovered": 2, 00:13:03.657 "num_base_bdevs_operational": 2, 00:13:03.657 "base_bdevs_list": [ 00:13:03.657 { 00:13:03.657 "name": null, 00:13:03.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.657 "is_configured": false, 00:13:03.657 "data_offset": 0, 00:13:03.657 "data_size": 63488 00:13:03.657 }, 00:13:03.657 { 00:13:03.657 "name": null, 00:13:03.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.657 "is_configured": false, 00:13:03.657 "data_offset": 2048, 00:13:03.657 "data_size": 63488 00:13:03.657 }, 00:13:03.657 { 00:13:03.657 "name": "BaseBdev3", 00:13:03.657 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:03.657 "is_configured": true, 00:13:03.657 "data_offset": 2048, 00:13:03.657 "data_size": 63488 00:13:03.657 }, 00:13:03.657 { 00:13:03.657 "name": "BaseBdev4", 00:13:03.657 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:03.657 "is_configured": true, 00:13:03.657 "data_offset": 2048, 00:13:03.657 "data_size": 63488 00:13:03.657 
} 00:13:03.657 ] 00:13:03.657 }' 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.657 [2024-11-28 16:25:55.337389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.657 [2024-11-28 16:25:55.337605] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:13:03.657 [2024-11-28 16:25:55.337676] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:03.657 request: 00:13:03.657 { 00:13:03.657 "base_bdev": "BaseBdev1", 00:13:03.657 "raid_bdev": "raid_bdev1", 00:13:03.657 "method": "bdev_raid_add_base_bdev", 00:13:03.657 "req_id": 1 00:13:03.657 } 00:13:03.657 Got JSON-RPC error response 00:13:03.657 response: 00:13:03.657 { 00:13:03.657 "code": -22, 00:13:03.657 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:03.657 } 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.657 16:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.631 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.891 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.891 "name": "raid_bdev1", 00:13:04.891 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:04.891 "strip_size_kb": 0, 00:13:04.891 "state": "online", 00:13:04.891 "raid_level": "raid1", 00:13:04.891 "superblock": true, 00:13:04.891 "num_base_bdevs": 4, 00:13:04.891 "num_base_bdevs_discovered": 2, 00:13:04.891 "num_base_bdevs_operational": 2, 00:13:04.891 "base_bdevs_list": [ 00:13:04.891 { 00:13:04.891 "name": null, 00:13:04.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.891 "is_configured": false, 00:13:04.891 "data_offset": 0, 00:13:04.891 "data_size": 63488 00:13:04.891 }, 00:13:04.891 { 00:13:04.891 "name": null, 00:13:04.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.891 "is_configured": false, 00:13:04.891 "data_offset": 2048, 00:13:04.891 "data_size": 63488 00:13:04.891 }, 00:13:04.891 { 00:13:04.891 "name": "BaseBdev3", 00:13:04.891 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:04.891 "is_configured": true, 00:13:04.891 
"data_offset": 2048, 00:13:04.891 "data_size": 63488 00:13:04.891 }, 00:13:04.891 { 00:13:04.891 "name": "BaseBdev4", 00:13:04.891 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:04.891 "is_configured": true, 00:13:04.891 "data_offset": 2048, 00:13:04.891 "data_size": 63488 00:13:04.891 } 00:13:04.891 ] 00:13:04.891 }' 00:13:04.891 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.891 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.151 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.152 "name": "raid_bdev1", 00:13:05.152 "uuid": "37fafcf3-3d44-4379-827e-f0cccf09d170", 00:13:05.152 "strip_size_kb": 0, 00:13:05.152 "state": "online", 00:13:05.152 "raid_level": "raid1", 00:13:05.152 "superblock": true, 
00:13:05.152 "num_base_bdevs": 4, 00:13:05.152 "num_base_bdevs_discovered": 2, 00:13:05.152 "num_base_bdevs_operational": 2, 00:13:05.152 "base_bdevs_list": [ 00:13:05.152 { 00:13:05.152 "name": null, 00:13:05.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.152 "is_configured": false, 00:13:05.152 "data_offset": 0, 00:13:05.152 "data_size": 63488 00:13:05.152 }, 00:13:05.152 { 00:13:05.152 "name": null, 00:13:05.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.152 "is_configured": false, 00:13:05.152 "data_offset": 2048, 00:13:05.152 "data_size": 63488 00:13:05.152 }, 00:13:05.152 { 00:13:05.152 "name": "BaseBdev3", 00:13:05.152 "uuid": "f4bb20e5-af16-5abc-bdb9-3ff1c4ed35e7", 00:13:05.152 "is_configured": true, 00:13:05.152 "data_offset": 2048, 00:13:05.152 "data_size": 63488 00:13:05.152 }, 00:13:05.152 { 00:13:05.152 "name": "BaseBdev4", 00:13:05.152 "uuid": "1889fec5-0ed0-5b73-96a6-d4ccbc3e8884", 00:13:05.152 "is_configured": true, 00:13:05.152 "data_offset": 2048, 00:13:05.152 "data_size": 63488 00:13:05.152 } 00:13:05.152 ] 00:13:05.152 }' 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.152 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89754 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89754 ']' 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89754 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:05.412 16:25:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89754 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89754' 00:13:05.412 killing process with pid 89754 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89754 00:13:05.412 Received shutdown signal, test time was about 17.543016 seconds 00:13:05.412 00:13:05.412 Latency(us) 00:13:05.412 [2024-11-28T16:25:57.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.412 [2024-11-28T16:25:57.183Z] =================================================================================================================== 00:13:05.412 [2024-11-28T16:25:57.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.412 [2024-11-28 16:25:56.974649] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.412 16:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89754 00:13:05.412 [2024-11-28 16:25:56.974828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.412 [2024-11-28 16:25:56.974924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.412 [2024-11-28 16:25:56.974939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:05.412 [2024-11-28 16:25:57.022456] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.673 16:25:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:05.673 00:13:05.673 real 0m19.477s 00:13:05.673 user 0m25.837s 00:13:05.673 sys 0m2.353s 00:13:05.673 16:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.673 ************************************ 00:13:05.673 END TEST raid_rebuild_test_sb_io 00:13:05.673 ************************************ 00:13:05.673 16:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.673 16:25:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:05.673 16:25:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:05.673 16:25:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:05.673 16:25:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.673 16:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.673 ************************************ 00:13:05.673 START TEST raid5f_state_function_test 00:13:05.673 ************************************ 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90459 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90459' 00:13:05.673 Process raid pid: 90459 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90459 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90459 ']' 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.673 16:25:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.673 [2024-11-28 16:25:57.438904] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:05.673 [2024-11-28 16:25:57.439043] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.934 [2024-11-28 16:25:57.600336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.934 [2024-11-28 16:25:57.647181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.934 [2024-11-28 16:25:57.690078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.934 [2024-11-28 16:25:57.690198] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.505 [2024-11-28 16:25:58.259995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.505 [2024-11-28 16:25:58.260060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.505 [2024-11-28 16:25:58.260074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.505 [2024-11-28 16:25:58.260084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.505 [2024-11-28 16:25:58.260090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:06.505 [2024-11-28 16:25:58.260102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.505 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.766 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:06.766 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.766 "name": "Existed_Raid", 00:13:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.766 "strip_size_kb": 64, 00:13:06.766 "state": "configuring", 00:13:06.766 "raid_level": "raid5f", 00:13:06.766 "superblock": false, 00:13:06.766 "num_base_bdevs": 3, 00:13:06.766 "num_base_bdevs_discovered": 0, 00:13:06.766 "num_base_bdevs_operational": 3, 00:13:06.766 "base_bdevs_list": [ 00:13:06.766 { 00:13:06.766 "name": "BaseBdev1", 00:13:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.766 "is_configured": false, 00:13:06.766 "data_offset": 0, 00:13:06.766 "data_size": 0 00:13:06.766 }, 00:13:06.766 { 00:13:06.766 "name": "BaseBdev2", 00:13:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.766 "is_configured": false, 00:13:06.766 "data_offset": 0, 00:13:06.766 "data_size": 0 00:13:06.766 }, 00:13:06.766 { 00:13:06.766 "name": "BaseBdev3", 00:13:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.766 "is_configured": false, 00:13:06.766 "data_offset": 0, 00:13:06.766 "data_size": 0 00:13:06.766 } 00:13:06.766 ] 00:13:06.766 }' 00:13:06.766 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.766 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 [2024-11-28 16:25:58.699338] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.027 [2024-11-28 16:25:58.699426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 [2024-11-28 16:25:58.711357] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:07.027 [2024-11-28 16:25:58.711397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:07.027 [2024-11-28 16:25:58.711406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.027 [2024-11-28 16:25:58.711415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.027 [2024-11-28 16:25:58.711421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.027 [2024-11-28 16:25:58.711429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 [2024-11-28 16:25:58.732316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.027 BaseBdev1 00:13:07.027 16:25:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.027 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 [ 00:13:07.027 { 00:13:07.027 "name": "BaseBdev1", 00:13:07.027 "aliases": [ 00:13:07.027 "b5649bde-1690-44f2-bcd2-7ad0af2db1b8" 00:13:07.027 ], 00:13:07.027 "product_name": "Malloc disk", 00:13:07.027 "block_size": 512, 00:13:07.027 "num_blocks": 65536, 00:13:07.027 "uuid": "b5649bde-1690-44f2-bcd2-7ad0af2db1b8", 00:13:07.027 "assigned_rate_limits": { 00:13:07.027 "rw_ios_per_sec": 0, 00:13:07.027 
"rw_mbytes_per_sec": 0, 00:13:07.027 "r_mbytes_per_sec": 0, 00:13:07.027 "w_mbytes_per_sec": 0 00:13:07.027 }, 00:13:07.027 "claimed": true, 00:13:07.027 "claim_type": "exclusive_write", 00:13:07.027 "zoned": false, 00:13:07.027 "supported_io_types": { 00:13:07.027 "read": true, 00:13:07.027 "write": true, 00:13:07.027 "unmap": true, 00:13:07.027 "flush": true, 00:13:07.027 "reset": true, 00:13:07.027 "nvme_admin": false, 00:13:07.027 "nvme_io": false, 00:13:07.027 "nvme_io_md": false, 00:13:07.027 "write_zeroes": true, 00:13:07.027 "zcopy": true, 00:13:07.027 "get_zone_info": false, 00:13:07.027 "zone_management": false, 00:13:07.027 "zone_append": false, 00:13:07.027 "compare": false, 00:13:07.027 "compare_and_write": false, 00:13:07.027 "abort": true, 00:13:07.027 "seek_hole": false, 00:13:07.027 "seek_data": false, 00:13:07.027 "copy": true, 00:13:07.027 "nvme_iov_md": false 00:13:07.027 }, 00:13:07.027 "memory_domains": [ 00:13:07.027 { 00:13:07.028 "dma_device_id": "system", 00:13:07.028 "dma_device_type": 1 00:13:07.028 }, 00:13:07.028 { 00:13:07.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.028 "dma_device_type": 2 00:13:07.028 } 00:13:07.028 ], 00:13:07.028 "driver_specific": {} 00:13:07.028 } 00:13:07.028 ] 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.028 16:25:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.028 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.288 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.288 "name": "Existed_Raid", 00:13:07.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.288 "strip_size_kb": 64, 00:13:07.288 "state": "configuring", 00:13:07.288 "raid_level": "raid5f", 00:13:07.288 "superblock": false, 00:13:07.288 "num_base_bdevs": 3, 00:13:07.288 "num_base_bdevs_discovered": 1, 00:13:07.288 "num_base_bdevs_operational": 3, 00:13:07.288 "base_bdevs_list": [ 00:13:07.288 { 00:13:07.288 "name": "BaseBdev1", 00:13:07.288 "uuid": "b5649bde-1690-44f2-bcd2-7ad0af2db1b8", 00:13:07.289 "is_configured": true, 00:13:07.289 "data_offset": 0, 00:13:07.289 "data_size": 65536 00:13:07.289 }, 00:13:07.289 { 00:13:07.289 "name": 
"BaseBdev2", 00:13:07.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.289 "is_configured": false, 00:13:07.289 "data_offset": 0, 00:13:07.289 "data_size": 0 00:13:07.289 }, 00:13:07.289 { 00:13:07.289 "name": "BaseBdev3", 00:13:07.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.289 "is_configured": false, 00:13:07.289 "data_offset": 0, 00:13:07.289 "data_size": 0 00:13:07.289 } 00:13:07.289 ] 00:13:07.289 }' 00:13:07.289 16:25:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.289 16:25:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.549 [2024-11-28 16:25:59.175755] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.549 [2024-11-28 16:25:59.175864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.549 [2024-11-28 16:25:59.187792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.549 [2024-11-28 16:25:59.189659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:07.549 [2024-11-28 16:25:59.189737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.549 [2024-11-28 16:25:59.189770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.549 [2024-11-28 16:25:59.189814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.549 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.550 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.550 "name": "Existed_Raid", 00:13:07.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.550 "strip_size_kb": 64, 00:13:07.550 "state": "configuring", 00:13:07.550 "raid_level": "raid5f", 00:13:07.550 "superblock": false, 00:13:07.550 "num_base_bdevs": 3, 00:13:07.550 "num_base_bdevs_discovered": 1, 00:13:07.550 "num_base_bdevs_operational": 3, 00:13:07.550 "base_bdevs_list": [ 00:13:07.550 { 00:13:07.550 "name": "BaseBdev1", 00:13:07.550 "uuid": "b5649bde-1690-44f2-bcd2-7ad0af2db1b8", 00:13:07.550 "is_configured": true, 00:13:07.550 "data_offset": 0, 00:13:07.550 "data_size": 65536 00:13:07.550 }, 00:13:07.550 { 00:13:07.550 "name": "BaseBdev2", 00:13:07.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.550 "is_configured": false, 00:13:07.550 "data_offset": 0, 00:13:07.550 "data_size": 0 00:13:07.550 }, 00:13:07.550 { 00:13:07.550 "name": "BaseBdev3", 00:13:07.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.550 "is_configured": false, 00:13:07.550 "data_offset": 0, 00:13:07.550 "data_size": 0 00:13:07.550 } 00:13:07.550 ] 00:13:07.550 }' 00:13:07.550 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.550 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 [2024-11-28 16:25:59.669360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.121 BaseBdev2 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.121 [ 00:13:08.121 { 00:13:08.121 "name": "BaseBdev2", 00:13:08.121 "aliases": [ 00:13:08.121 "68912ba9-f3df-43cb-9cdc-1796316f0762" 00:13:08.121 ], 00:13:08.121 "product_name": "Malloc disk", 00:13:08.121 "block_size": 512, 00:13:08.121 "num_blocks": 65536, 00:13:08.121 "uuid": "68912ba9-f3df-43cb-9cdc-1796316f0762", 00:13:08.121 "assigned_rate_limits": { 00:13:08.121 "rw_ios_per_sec": 0, 00:13:08.121 "rw_mbytes_per_sec": 0, 00:13:08.121 "r_mbytes_per_sec": 0, 00:13:08.121 "w_mbytes_per_sec": 0 00:13:08.121 }, 00:13:08.121 "claimed": true, 00:13:08.121 "claim_type": "exclusive_write", 00:13:08.121 "zoned": false, 00:13:08.121 "supported_io_types": { 00:13:08.121 "read": true, 00:13:08.121 "write": true, 00:13:08.121 "unmap": true, 00:13:08.121 "flush": true, 00:13:08.121 "reset": true, 00:13:08.121 "nvme_admin": false, 00:13:08.121 "nvme_io": false, 00:13:08.121 "nvme_io_md": false, 00:13:08.121 "write_zeroes": true, 00:13:08.121 "zcopy": true, 00:13:08.121 "get_zone_info": false, 00:13:08.121 "zone_management": false, 00:13:08.121 "zone_append": false, 00:13:08.121 "compare": false, 00:13:08.121 "compare_and_write": false, 00:13:08.121 "abort": true, 00:13:08.121 "seek_hole": false, 00:13:08.121 "seek_data": false, 00:13:08.121 "copy": true, 00:13:08.121 "nvme_iov_md": false 00:13:08.121 }, 00:13:08.121 "memory_domains": [ 00:13:08.121 { 00:13:08.121 "dma_device_id": "system", 00:13:08.121 "dma_device_type": 1 00:13:08.121 }, 00:13:08.121 { 00:13:08.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.121 "dma_device_type": 2 00:13:08.121 } 00:13:08.121 ], 00:13:08.121 "driver_specific": {} 00:13:08.121 } 00:13:08.121 ] 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:08.121 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:08.122 "name": "Existed_Raid", 00:13:08.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.122 "strip_size_kb": 64, 00:13:08.122 "state": "configuring", 00:13:08.122 "raid_level": "raid5f", 00:13:08.122 "superblock": false, 00:13:08.122 "num_base_bdevs": 3, 00:13:08.122 "num_base_bdevs_discovered": 2, 00:13:08.122 "num_base_bdevs_operational": 3, 00:13:08.122 "base_bdevs_list": [ 00:13:08.122 { 00:13:08.122 "name": "BaseBdev1", 00:13:08.122 "uuid": "b5649bde-1690-44f2-bcd2-7ad0af2db1b8", 00:13:08.122 "is_configured": true, 00:13:08.122 "data_offset": 0, 00:13:08.122 "data_size": 65536 00:13:08.122 }, 00:13:08.122 { 00:13:08.122 "name": "BaseBdev2", 00:13:08.122 "uuid": "68912ba9-f3df-43cb-9cdc-1796316f0762", 00:13:08.122 "is_configured": true, 00:13:08.122 "data_offset": 0, 00:13:08.122 "data_size": 65536 00:13:08.122 }, 00:13:08.122 { 00:13:08.122 "name": "BaseBdev3", 00:13:08.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.122 "is_configured": false, 00:13:08.122 "data_offset": 0, 00:13:08.122 "data_size": 0 00:13:08.122 } 00:13:08.122 ] 00:13:08.122 }' 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.122 16:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.383 [2024-11-28 16:26:00.143539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.383 [2024-11-28 16:26:00.143640] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:08.383 [2024-11-28 16:26:00.143667] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:08.383 [2024-11-28 16:26:00.144071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:08.383 [2024-11-28 16:26:00.144527] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:08.383 [2024-11-28 16:26:00.144545] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:08.383 [2024-11-28 16:26:00.144740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.383 BaseBdev3 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.383 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 [ 00:13:08.644 { 00:13:08.644 "name": "BaseBdev3", 00:13:08.644 "aliases": [ 00:13:08.644 "a5586599-61da-4129-afe4-c4748415f5e5" 00:13:08.644 ], 00:13:08.644 "product_name": "Malloc disk", 00:13:08.644 "block_size": 512, 00:13:08.644 "num_blocks": 65536, 00:13:08.644 "uuid": "a5586599-61da-4129-afe4-c4748415f5e5", 00:13:08.644 "assigned_rate_limits": { 00:13:08.644 "rw_ios_per_sec": 0, 00:13:08.644 "rw_mbytes_per_sec": 0, 00:13:08.644 "r_mbytes_per_sec": 0, 00:13:08.644 "w_mbytes_per_sec": 0 00:13:08.644 }, 00:13:08.644 "claimed": true, 00:13:08.644 "claim_type": "exclusive_write", 00:13:08.644 "zoned": false, 00:13:08.644 "supported_io_types": { 00:13:08.644 "read": true, 00:13:08.644 "write": true, 00:13:08.644 "unmap": true, 00:13:08.644 "flush": true, 00:13:08.644 "reset": true, 00:13:08.644 "nvme_admin": false, 00:13:08.644 "nvme_io": false, 00:13:08.644 "nvme_io_md": false, 00:13:08.644 "write_zeroes": true, 00:13:08.644 "zcopy": true, 00:13:08.644 "get_zone_info": false, 00:13:08.644 "zone_management": false, 00:13:08.644 "zone_append": false, 00:13:08.644 "compare": false, 00:13:08.644 "compare_and_write": false, 00:13:08.644 "abort": true, 00:13:08.644 "seek_hole": false, 00:13:08.644 "seek_data": false, 00:13:08.644 "copy": true, 00:13:08.644 "nvme_iov_md": false 00:13:08.644 }, 00:13:08.644 "memory_domains": [ 00:13:08.644 { 00:13:08.644 "dma_device_id": "system", 00:13:08.644 "dma_device_type": 1 00:13:08.644 }, 00:13:08.644 { 00:13:08.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.644 "dma_device_type": 2 00:13:08.644 } 00:13:08.644 ], 00:13:08.644 "driver_specific": {} 00:13:08.644 } 00:13:08.644 ] 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 16:26:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.644 "name": "Existed_Raid", 00:13:08.644 "uuid": "68d755e9-1224-4d38-94b7-f0ab01a0cd2c", 00:13:08.644 "strip_size_kb": 64, 00:13:08.644 "state": "online", 00:13:08.644 "raid_level": "raid5f", 00:13:08.644 "superblock": false, 00:13:08.644 "num_base_bdevs": 3, 00:13:08.644 "num_base_bdevs_discovered": 3, 00:13:08.644 "num_base_bdevs_operational": 3, 00:13:08.644 "base_bdevs_list": [ 00:13:08.644 { 00:13:08.644 "name": "BaseBdev1", 00:13:08.644 "uuid": "b5649bde-1690-44f2-bcd2-7ad0af2db1b8", 00:13:08.644 "is_configured": true, 00:13:08.644 "data_offset": 0, 00:13:08.644 "data_size": 65536 00:13:08.644 }, 00:13:08.644 { 00:13:08.644 "name": "BaseBdev2", 00:13:08.644 "uuid": "68912ba9-f3df-43cb-9cdc-1796316f0762", 00:13:08.644 "is_configured": true, 00:13:08.644 "data_offset": 0, 00:13:08.644 "data_size": 65536 00:13:08.644 }, 00:13:08.644 { 00:13:08.644 "name": "BaseBdev3", 00:13:08.644 "uuid": "a5586599-61da-4129-afe4-c4748415f5e5", 00:13:08.644 "is_configured": true, 00:13:08.644 "data_offset": 0, 00:13:08.644 "data_size": 65536 00:13:08.644 } 00:13:08.644 ] 00:13:08.644 }' 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.644 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:09.215 16:26:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.215 [2024-11-28 16:26:00.706808] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.215 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:09.215 "name": "Existed_Raid", 00:13:09.215 "aliases": [ 00:13:09.215 "68d755e9-1224-4d38-94b7-f0ab01a0cd2c" 00:13:09.215 ], 00:13:09.215 "product_name": "Raid Volume", 00:13:09.215 "block_size": 512, 00:13:09.215 "num_blocks": 131072, 00:13:09.215 "uuid": "68d755e9-1224-4d38-94b7-f0ab01a0cd2c", 00:13:09.215 "assigned_rate_limits": { 00:13:09.215 "rw_ios_per_sec": 0, 00:13:09.215 "rw_mbytes_per_sec": 0, 00:13:09.215 "r_mbytes_per_sec": 0, 00:13:09.215 "w_mbytes_per_sec": 0 00:13:09.215 }, 00:13:09.215 "claimed": false, 00:13:09.215 "zoned": false, 00:13:09.215 "supported_io_types": { 00:13:09.216 "read": true, 00:13:09.216 "write": true, 00:13:09.216 "unmap": false, 00:13:09.216 "flush": false, 00:13:09.216 "reset": true, 00:13:09.216 "nvme_admin": false, 00:13:09.216 "nvme_io": false, 00:13:09.216 "nvme_io_md": false, 00:13:09.216 "write_zeroes": true, 00:13:09.216 "zcopy": false, 00:13:09.216 "get_zone_info": false, 00:13:09.216 "zone_management": false, 00:13:09.216 "zone_append": false, 
00:13:09.216 "compare": false, 00:13:09.216 "compare_and_write": false, 00:13:09.216 "abort": false, 00:13:09.216 "seek_hole": false, 00:13:09.216 "seek_data": false, 00:13:09.216 "copy": false, 00:13:09.216 "nvme_iov_md": false 00:13:09.216 }, 00:13:09.216 "driver_specific": { 00:13:09.216 "raid": { 00:13:09.216 "uuid": "68d755e9-1224-4d38-94b7-f0ab01a0cd2c", 00:13:09.216 "strip_size_kb": 64, 00:13:09.216 "state": "online", 00:13:09.216 "raid_level": "raid5f", 00:13:09.216 "superblock": false, 00:13:09.216 "num_base_bdevs": 3, 00:13:09.216 "num_base_bdevs_discovered": 3, 00:13:09.216 "num_base_bdevs_operational": 3, 00:13:09.216 "base_bdevs_list": [ 00:13:09.216 { 00:13:09.216 "name": "BaseBdev1", 00:13:09.216 "uuid": "b5649bde-1690-44f2-bcd2-7ad0af2db1b8", 00:13:09.216 "is_configured": true, 00:13:09.216 "data_offset": 0, 00:13:09.216 "data_size": 65536 00:13:09.216 }, 00:13:09.216 { 00:13:09.216 "name": "BaseBdev2", 00:13:09.216 "uuid": "68912ba9-f3df-43cb-9cdc-1796316f0762", 00:13:09.216 "is_configured": true, 00:13:09.216 "data_offset": 0, 00:13:09.216 "data_size": 65536 00:13:09.216 }, 00:13:09.216 { 00:13:09.216 "name": "BaseBdev3", 00:13:09.216 "uuid": "a5586599-61da-4129-afe4-c4748415f5e5", 00:13:09.216 "is_configured": true, 00:13:09.216 "data_offset": 0, 00:13:09.216 "data_size": 65536 00:13:09.216 } 00:13:09.216 ] 00:13:09.216 } 00:13:09.216 } 00:13:09.216 }' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:09.216 BaseBdev2 00:13:09.216 BaseBdev3' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.216 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.476 [2024-11-28 16:26:00.986185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:09.476 
16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:09.476 16:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.476 "name": "Existed_Raid", 00:13:09.476 "uuid": "68d755e9-1224-4d38-94b7-f0ab01a0cd2c", 00:13:09.476 "strip_size_kb": 64, 00:13:09.476 "state": 
"online", 00:13:09.476 "raid_level": "raid5f", 00:13:09.476 "superblock": false, 00:13:09.476 "num_base_bdevs": 3, 00:13:09.476 "num_base_bdevs_discovered": 2, 00:13:09.476 "num_base_bdevs_operational": 2, 00:13:09.476 "base_bdevs_list": [ 00:13:09.476 { 00:13:09.476 "name": null, 00:13:09.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.476 "is_configured": false, 00:13:09.476 "data_offset": 0, 00:13:09.476 "data_size": 65536 00:13:09.476 }, 00:13:09.476 { 00:13:09.476 "name": "BaseBdev2", 00:13:09.476 "uuid": "68912ba9-f3df-43cb-9cdc-1796316f0762", 00:13:09.476 "is_configured": true, 00:13:09.476 "data_offset": 0, 00:13:09.476 "data_size": 65536 00:13:09.476 }, 00:13:09.476 { 00:13:09.476 "name": "BaseBdev3", 00:13:09.476 "uuid": "a5586599-61da-4129-afe4-c4748415f5e5", 00:13:09.476 "is_configured": true, 00:13:09.476 "data_offset": 0, 00:13:09.476 "data_size": 65536 00:13:09.476 } 00:13:09.476 ] 00:13:09.476 }' 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.476 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 [2024-11-28 16:26:01.464816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.736 [2024-11-28 16:26:01.464946] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.736 [2024-11-28 16:26:01.476087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:09.736 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:09.737 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.737 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.737 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:09.737 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.737 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.997 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.998 [2024-11-28 16:26:01.532020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:09.998 [2024-11-28 16:26:01.532068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.998 BaseBdev2 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:09.998 [ 00:13:09.998 { 00:13:09.998 "name": "BaseBdev2", 00:13:09.998 "aliases": [ 00:13:09.998 "410aeef4-c728-417f-bf9d-0d33b0ef9a9d" 00:13:09.998 ], 00:13:09.998 "product_name": "Malloc disk", 00:13:09.998 "block_size": 512, 00:13:09.998 "num_blocks": 65536, 00:13:09.998 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:09.998 "assigned_rate_limits": { 00:13:09.998 "rw_ios_per_sec": 0, 00:13:09.998 "rw_mbytes_per_sec": 0, 00:13:09.998 "r_mbytes_per_sec": 0, 00:13:09.998 "w_mbytes_per_sec": 0 00:13:09.998 }, 00:13:09.998 "claimed": false, 00:13:09.998 "zoned": false, 00:13:09.998 "supported_io_types": { 00:13:09.998 "read": true, 00:13:09.998 "write": true, 00:13:09.998 "unmap": true, 00:13:09.998 "flush": true, 00:13:09.998 "reset": true, 00:13:09.998 "nvme_admin": false, 00:13:09.998 "nvme_io": false, 00:13:09.998 "nvme_io_md": false, 00:13:09.998 "write_zeroes": true, 00:13:09.998 "zcopy": true, 00:13:09.998 "get_zone_info": false, 00:13:09.998 "zone_management": false, 00:13:09.998 "zone_append": false, 00:13:09.998 "compare": false, 00:13:09.998 "compare_and_write": false, 00:13:09.998 "abort": true, 00:13:09.998 "seek_hole": false, 00:13:09.998 "seek_data": false, 00:13:09.998 "copy": true, 00:13:09.998 "nvme_iov_md": false 00:13:09.998 }, 00:13:09.998 "memory_domains": [ 00:13:09.998 { 00:13:09.998 "dma_device_id": "system", 00:13:09.998 "dma_device_type": 1 00:13:09.998 }, 00:13:09.998 { 00:13:09.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.998 "dma_device_type": 2 00:13:09.998 } 00:13:09.998 ], 00:13:09.998 "driver_specific": {} 00:13:09.998 } 00:13:09.998 ] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.998 BaseBdev3 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.998 16:26:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.998 [ 00:13:09.998 { 00:13:09.998 "name": "BaseBdev3", 00:13:09.998 "aliases": [ 00:13:09.998 "6bea9f56-34d6-4cae-8400-d4a164d23119" 00:13:09.998 ], 00:13:09.998 "product_name": "Malloc disk", 00:13:09.998 "block_size": 512, 00:13:09.998 "num_blocks": 65536, 00:13:09.998 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:09.998 "assigned_rate_limits": { 00:13:09.998 "rw_ios_per_sec": 0, 00:13:09.998 "rw_mbytes_per_sec": 0, 00:13:09.998 "r_mbytes_per_sec": 0, 00:13:09.998 "w_mbytes_per_sec": 0 00:13:09.998 }, 00:13:09.999 "claimed": false, 00:13:09.999 "zoned": false, 00:13:09.999 "supported_io_types": { 00:13:09.999 "read": true, 00:13:09.999 "write": true, 00:13:09.999 "unmap": true, 00:13:09.999 "flush": true, 00:13:09.999 "reset": true, 00:13:09.999 "nvme_admin": false, 00:13:09.999 "nvme_io": false, 00:13:09.999 "nvme_io_md": false, 00:13:09.999 "write_zeroes": true, 00:13:09.999 "zcopy": true, 00:13:09.999 "get_zone_info": false, 00:13:09.999 "zone_management": false, 00:13:09.999 "zone_append": false, 00:13:09.999 "compare": false, 00:13:09.999 "compare_and_write": false, 00:13:09.999 "abort": true, 00:13:09.999 "seek_hole": false, 00:13:09.999 "seek_data": false, 00:13:09.999 "copy": true, 00:13:09.999 "nvme_iov_md": false 00:13:09.999 }, 00:13:09.999 "memory_domains": [ 00:13:09.999 { 00:13:09.999 "dma_device_id": "system", 00:13:09.999 "dma_device_type": 1 00:13:09.999 }, 00:13:09.999 { 00:13:09.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.999 "dma_device_type": 2 00:13:09.999 } 00:13:09.999 ], 00:13:09.999 "driver_specific": {} 00:13:09.999 } 00:13:09.999 ] 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:09.999 16:26:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.999 [2024-11-28 16:26:01.694684] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.999 [2024-11-28 16:26:01.694771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.999 [2024-11-28 16:26:01.694828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.999 [2024-11-28 16:26:01.696620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.999 16:26:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.999 "name": "Existed_Raid", 00:13:09.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.999 "strip_size_kb": 64, 00:13:09.999 "state": "configuring", 00:13:09.999 "raid_level": "raid5f", 00:13:09.999 "superblock": false, 00:13:09.999 "num_base_bdevs": 3, 00:13:09.999 "num_base_bdevs_discovered": 2, 00:13:09.999 "num_base_bdevs_operational": 3, 00:13:09.999 "base_bdevs_list": [ 00:13:09.999 { 00:13:09.999 "name": "BaseBdev1", 00:13:09.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.999 "is_configured": false, 00:13:09.999 "data_offset": 0, 00:13:09.999 "data_size": 0 00:13:09.999 }, 00:13:09.999 { 00:13:09.999 "name": "BaseBdev2", 00:13:09.999 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:09.999 "is_configured": true, 00:13:09.999 "data_offset": 0, 00:13:09.999 "data_size": 65536 00:13:09.999 }, 00:13:09.999 { 00:13:09.999 "name": "BaseBdev3", 00:13:09.999 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:09.999 "is_configured": true, 
00:13:09.999 "data_offset": 0, 00:13:09.999 "data_size": 65536 00:13:09.999 } 00:13:09.999 ] 00:13:09.999 }' 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.999 16:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:10.569 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.569 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.569 [2024-11-28 16:26:02.149936] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.570 16:26:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.570 "name": "Existed_Raid", 00:13:10.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.570 "strip_size_kb": 64, 00:13:10.570 "state": "configuring", 00:13:10.570 "raid_level": "raid5f", 00:13:10.570 "superblock": false, 00:13:10.570 "num_base_bdevs": 3, 00:13:10.570 "num_base_bdevs_discovered": 1, 00:13:10.570 "num_base_bdevs_operational": 3, 00:13:10.570 "base_bdevs_list": [ 00:13:10.570 { 00:13:10.570 "name": "BaseBdev1", 00:13:10.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.570 "is_configured": false, 00:13:10.570 "data_offset": 0, 00:13:10.570 "data_size": 0 00:13:10.570 }, 00:13:10.570 { 00:13:10.570 "name": null, 00:13:10.570 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:10.570 "is_configured": false, 00:13:10.570 "data_offset": 0, 00:13:10.570 "data_size": 65536 00:13:10.570 }, 00:13:10.570 { 00:13:10.570 "name": "BaseBdev3", 00:13:10.570 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:10.570 "is_configured": true, 00:13:10.570 "data_offset": 0, 00:13:10.570 "data_size": 65536 00:13:10.570 } 00:13:10.570 ] 00:13:10.570 }' 00:13:10.570 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.570 16:26:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.830 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.830 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.830 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.830 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.830 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.090 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 [2024-11-28 16:26:02.636210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.091 BaseBdev1 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.091 16:26:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 [ 00:13:11.091 { 00:13:11.091 "name": "BaseBdev1", 00:13:11.091 "aliases": [ 00:13:11.091 "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09" 00:13:11.091 ], 00:13:11.091 "product_name": "Malloc disk", 00:13:11.091 "block_size": 512, 00:13:11.091 "num_blocks": 65536, 00:13:11.091 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:11.091 "assigned_rate_limits": { 00:13:11.091 "rw_ios_per_sec": 0, 00:13:11.091 "rw_mbytes_per_sec": 0, 00:13:11.091 "r_mbytes_per_sec": 0, 00:13:11.091 "w_mbytes_per_sec": 0 00:13:11.091 }, 00:13:11.091 "claimed": true, 00:13:11.091 "claim_type": "exclusive_write", 00:13:11.091 "zoned": false, 00:13:11.091 "supported_io_types": { 00:13:11.091 "read": true, 00:13:11.091 "write": true, 00:13:11.091 "unmap": true, 00:13:11.091 "flush": true, 00:13:11.091 "reset": true, 00:13:11.091 "nvme_admin": false, 00:13:11.091 "nvme_io": false, 00:13:11.091 "nvme_io_md": false, 00:13:11.091 "write_zeroes": true, 00:13:11.091 "zcopy": true, 00:13:11.091 "get_zone_info": false, 00:13:11.091 "zone_management": false, 00:13:11.091 "zone_append": false, 00:13:11.091 
"compare": false, 00:13:11.091 "compare_and_write": false, 00:13:11.091 "abort": true, 00:13:11.091 "seek_hole": false, 00:13:11.091 "seek_data": false, 00:13:11.091 "copy": true, 00:13:11.091 "nvme_iov_md": false 00:13:11.091 }, 00:13:11.091 "memory_domains": [ 00:13:11.091 { 00:13:11.091 "dma_device_id": "system", 00:13:11.091 "dma_device_type": 1 00:13:11.091 }, 00:13:11.091 { 00:13:11.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.091 "dma_device_type": 2 00:13:11.091 } 00:13:11.091 ], 00:13:11.091 "driver_specific": {} 00:13:11.091 } 00:13:11.091 ] 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.091 16:26:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.091 "name": "Existed_Raid", 00:13:11.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.091 "strip_size_kb": 64, 00:13:11.091 "state": "configuring", 00:13:11.091 "raid_level": "raid5f", 00:13:11.091 "superblock": false, 00:13:11.091 "num_base_bdevs": 3, 00:13:11.091 "num_base_bdevs_discovered": 2, 00:13:11.091 "num_base_bdevs_operational": 3, 00:13:11.091 "base_bdevs_list": [ 00:13:11.091 { 00:13:11.091 "name": "BaseBdev1", 00:13:11.091 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:11.091 "is_configured": true, 00:13:11.091 "data_offset": 0, 00:13:11.091 "data_size": 65536 00:13:11.091 }, 00:13:11.091 { 00:13:11.091 "name": null, 00:13:11.091 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:11.091 "is_configured": false, 00:13:11.091 "data_offset": 0, 00:13:11.091 "data_size": 65536 00:13:11.091 }, 00:13:11.091 { 00:13:11.091 "name": "BaseBdev3", 00:13:11.091 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:11.091 "is_configured": true, 00:13:11.091 "data_offset": 0, 00:13:11.091 "data_size": 65536 00:13:11.091 } 00:13:11.091 ] 00:13:11.091 }' 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.091 16:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.661 16:26:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.661 [2024-11-28 16:26:03.215287] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.661 16:26:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.661 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.662 "name": "Existed_Raid", 00:13:11.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.662 "strip_size_kb": 64, 00:13:11.662 "state": "configuring", 00:13:11.662 "raid_level": "raid5f", 00:13:11.662 "superblock": false, 00:13:11.662 "num_base_bdevs": 3, 00:13:11.662 "num_base_bdevs_discovered": 1, 00:13:11.662 "num_base_bdevs_operational": 3, 00:13:11.662 "base_bdevs_list": [ 00:13:11.662 { 00:13:11.662 "name": "BaseBdev1", 00:13:11.662 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:11.662 "is_configured": true, 00:13:11.662 "data_offset": 0, 00:13:11.662 "data_size": 65536 00:13:11.662 }, 00:13:11.662 { 00:13:11.662 "name": null, 00:13:11.662 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:11.662 "is_configured": false, 00:13:11.662 "data_offset": 0, 00:13:11.662 "data_size": 65536 00:13:11.662 }, 00:13:11.662 { 00:13:11.662 "name": null, 
00:13:11.662 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:11.662 "is_configured": false, 00:13:11.662 "data_offset": 0, 00:13:11.662 "data_size": 65536 00:13:11.662 } 00:13:11.662 ] 00:13:11.662 }' 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.662 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.921 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.181 [2024-11-28 16:26:03.698565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.181 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.181 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:12.181 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.182 16:26:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.182 "name": "Existed_Raid", 00:13:12.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.182 "strip_size_kb": 64, 00:13:12.182 "state": "configuring", 00:13:12.182 "raid_level": "raid5f", 00:13:12.182 "superblock": false, 00:13:12.182 "num_base_bdevs": 3, 00:13:12.182 "num_base_bdevs_discovered": 2, 00:13:12.182 "num_base_bdevs_operational": 3, 00:13:12.182 "base_bdevs_list": [ 00:13:12.182 { 
00:13:12.182 "name": "BaseBdev1", 00:13:12.182 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:12.182 "is_configured": true, 00:13:12.182 "data_offset": 0, 00:13:12.182 "data_size": 65536 00:13:12.182 }, 00:13:12.182 { 00:13:12.182 "name": null, 00:13:12.182 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:12.182 "is_configured": false, 00:13:12.182 "data_offset": 0, 00:13:12.182 "data_size": 65536 00:13:12.182 }, 00:13:12.182 { 00:13:12.182 "name": "BaseBdev3", 00:13:12.182 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:12.182 "is_configured": true, 00:13:12.182 "data_offset": 0, 00:13:12.182 "data_size": 65536 00:13:12.182 } 00:13:12.182 ] 00:13:12.182 }' 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.182 16:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.442 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.702 [2024-11-28 16:26:04.213679] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.702 "name": "Existed_Raid", 00:13:12.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.702 "strip_size_kb": 64, 00:13:12.702 "state": "configuring", 00:13:12.702 "raid_level": "raid5f", 00:13:12.702 "superblock": false, 00:13:12.702 "num_base_bdevs": 3, 00:13:12.702 "num_base_bdevs_discovered": 1, 00:13:12.702 "num_base_bdevs_operational": 3, 00:13:12.702 "base_bdevs_list": [ 00:13:12.702 { 00:13:12.702 "name": null, 00:13:12.702 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:12.702 "is_configured": false, 00:13:12.702 "data_offset": 0, 00:13:12.702 "data_size": 65536 00:13:12.702 }, 00:13:12.702 { 00:13:12.702 "name": null, 00:13:12.702 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:12.702 "is_configured": false, 00:13:12.702 "data_offset": 0, 00:13:12.702 "data_size": 65536 00:13:12.702 }, 00:13:12.702 { 00:13:12.702 "name": "BaseBdev3", 00:13:12.702 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:12.702 "is_configured": true, 00:13:12.702 "data_offset": 0, 00:13:12.702 "data_size": 65536 00:13:12.702 } 00:13:12.702 ] 00:13:12.702 }' 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.702 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.963 [2024-11-28 16:26:04.711722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.963 16:26:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.963 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.223 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.223 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.223 "name": "Existed_Raid", 00:13:13.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.223 "strip_size_kb": 64, 00:13:13.223 "state": "configuring", 00:13:13.223 "raid_level": "raid5f", 00:13:13.223 "superblock": false, 00:13:13.223 "num_base_bdevs": 3, 00:13:13.223 "num_base_bdevs_discovered": 2, 00:13:13.223 "num_base_bdevs_operational": 3, 00:13:13.223 "base_bdevs_list": [ 00:13:13.223 { 00:13:13.223 "name": null, 00:13:13.223 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:13.223 "is_configured": false, 00:13:13.223 "data_offset": 0, 00:13:13.223 "data_size": 65536 00:13:13.223 }, 00:13:13.223 { 00:13:13.223 "name": "BaseBdev2", 00:13:13.223 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:13.223 "is_configured": true, 00:13:13.223 "data_offset": 0, 00:13:13.223 "data_size": 65536 00:13:13.223 }, 00:13:13.223 { 00:13:13.223 "name": "BaseBdev3", 00:13:13.223 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:13.223 "is_configured": true, 00:13:13.223 "data_offset": 0, 00:13:13.223 "data_size": 65536 00:13:13.223 } 00:13:13.223 ] 00:13:13.223 }' 00:13:13.223 16:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.223 16:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.482 16:26:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 607bbb16-2b99-4bcf-a1a6-91b34bdc7e09 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.482 [2024-11-28 16:26:05.233791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:13.482 [2024-11-28 16:26:05.233859] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:13.482 [2024-11-28 16:26:05.233873] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:13.482 [2024-11-28 16:26:05.234115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:13:13.482 [2024-11-28 16:26:05.234560] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:13.482 [2024-11-28 16:26:05.234580] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:13.482 [2024-11-28 16:26:05.234756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.482 NewBaseBdev 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:13.482 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.482 16:26:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.741 [ 00:13:13.741 { 00:13:13.741 "name": "NewBaseBdev", 00:13:13.741 "aliases": [ 00:13:13.741 "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09" 00:13:13.741 ], 00:13:13.741 "product_name": "Malloc disk", 00:13:13.741 "block_size": 512, 00:13:13.741 "num_blocks": 65536, 00:13:13.741 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:13.741 "assigned_rate_limits": { 00:13:13.741 "rw_ios_per_sec": 0, 00:13:13.741 "rw_mbytes_per_sec": 0, 00:13:13.741 "r_mbytes_per_sec": 0, 00:13:13.741 "w_mbytes_per_sec": 0 00:13:13.741 }, 00:13:13.741 "claimed": true, 00:13:13.741 "claim_type": "exclusive_write", 00:13:13.741 "zoned": false, 00:13:13.741 "supported_io_types": { 00:13:13.741 "read": true, 00:13:13.741 "write": true, 00:13:13.741 "unmap": true, 00:13:13.741 "flush": true, 00:13:13.741 "reset": true, 00:13:13.741 "nvme_admin": false, 00:13:13.741 "nvme_io": false, 00:13:13.741 "nvme_io_md": false, 00:13:13.741 "write_zeroes": true, 00:13:13.741 "zcopy": true, 00:13:13.741 "get_zone_info": false, 00:13:13.741 "zone_management": false, 00:13:13.741 "zone_append": false, 00:13:13.741 "compare": false, 00:13:13.741 "compare_and_write": false, 00:13:13.741 "abort": true, 00:13:13.741 "seek_hole": false, 00:13:13.741 "seek_data": false, 00:13:13.741 "copy": true, 00:13:13.741 "nvme_iov_md": false 00:13:13.741 }, 00:13:13.741 "memory_domains": [ 00:13:13.741 { 00:13:13.741 "dma_device_id": "system", 00:13:13.741 "dma_device_type": 1 00:13:13.741 }, 00:13:13.741 { 00:13:13.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.741 "dma_device_type": 2 00:13:13.741 } 00:13:13.741 ], 00:13:13.741 "driver_specific": {} 00:13:13.741 } 00:13:13.741 ] 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:13.741 16:26:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.741 "name": "Existed_Raid", 00:13:13.741 "uuid": "8726bec0-12d1-4178-9380-72f0d8f58193", 00:13:13.741 "strip_size_kb": 64, 00:13:13.741 "state": "online", 
00:13:13.741 "raid_level": "raid5f", 00:13:13.741 "superblock": false, 00:13:13.741 "num_base_bdevs": 3, 00:13:13.741 "num_base_bdevs_discovered": 3, 00:13:13.741 "num_base_bdevs_operational": 3, 00:13:13.741 "base_bdevs_list": [ 00:13:13.741 { 00:13:13.741 "name": "NewBaseBdev", 00:13:13.741 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:13.741 "is_configured": true, 00:13:13.741 "data_offset": 0, 00:13:13.741 "data_size": 65536 00:13:13.741 }, 00:13:13.741 { 00:13:13.741 "name": "BaseBdev2", 00:13:13.741 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:13.741 "is_configured": true, 00:13:13.741 "data_offset": 0, 00:13:13.741 "data_size": 65536 00:13:13.741 }, 00:13:13.741 { 00:13:13.741 "name": "BaseBdev3", 00:13:13.741 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:13.741 "is_configured": true, 00:13:13.741 "data_offset": 0, 00:13:13.741 "data_size": 65536 00:13:13.741 } 00:13:13.741 ] 00:13:13.741 }' 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.741 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:14.002 16:26:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.002 [2024-11-28 16:26:05.741106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.002 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.263 "name": "Existed_Raid", 00:13:14.263 "aliases": [ 00:13:14.263 "8726bec0-12d1-4178-9380-72f0d8f58193" 00:13:14.263 ], 00:13:14.263 "product_name": "Raid Volume", 00:13:14.263 "block_size": 512, 00:13:14.263 "num_blocks": 131072, 00:13:14.263 "uuid": "8726bec0-12d1-4178-9380-72f0d8f58193", 00:13:14.263 "assigned_rate_limits": { 00:13:14.263 "rw_ios_per_sec": 0, 00:13:14.263 "rw_mbytes_per_sec": 0, 00:13:14.263 "r_mbytes_per_sec": 0, 00:13:14.263 "w_mbytes_per_sec": 0 00:13:14.263 }, 00:13:14.263 "claimed": false, 00:13:14.263 "zoned": false, 00:13:14.263 "supported_io_types": { 00:13:14.263 "read": true, 00:13:14.263 "write": true, 00:13:14.263 "unmap": false, 00:13:14.263 "flush": false, 00:13:14.263 "reset": true, 00:13:14.263 "nvme_admin": false, 00:13:14.263 "nvme_io": false, 00:13:14.263 "nvme_io_md": false, 00:13:14.263 "write_zeroes": true, 00:13:14.263 "zcopy": false, 00:13:14.263 "get_zone_info": false, 00:13:14.263 "zone_management": false, 00:13:14.263 "zone_append": false, 00:13:14.263 "compare": false, 00:13:14.263 "compare_and_write": false, 00:13:14.263 "abort": false, 00:13:14.263 "seek_hole": false, 00:13:14.263 "seek_data": false, 00:13:14.263 "copy": false, 00:13:14.263 "nvme_iov_md": false 00:13:14.263 }, 00:13:14.263 "driver_specific": { 00:13:14.263 "raid": { 00:13:14.263 "uuid": 
"8726bec0-12d1-4178-9380-72f0d8f58193", 00:13:14.263 "strip_size_kb": 64, 00:13:14.263 "state": "online", 00:13:14.263 "raid_level": "raid5f", 00:13:14.263 "superblock": false, 00:13:14.263 "num_base_bdevs": 3, 00:13:14.263 "num_base_bdevs_discovered": 3, 00:13:14.263 "num_base_bdevs_operational": 3, 00:13:14.263 "base_bdevs_list": [ 00:13:14.263 { 00:13:14.263 "name": "NewBaseBdev", 00:13:14.263 "uuid": "607bbb16-2b99-4bcf-a1a6-91b34bdc7e09", 00:13:14.263 "is_configured": true, 00:13:14.263 "data_offset": 0, 00:13:14.263 "data_size": 65536 00:13:14.263 }, 00:13:14.263 { 00:13:14.263 "name": "BaseBdev2", 00:13:14.263 "uuid": "410aeef4-c728-417f-bf9d-0d33b0ef9a9d", 00:13:14.263 "is_configured": true, 00:13:14.263 "data_offset": 0, 00:13:14.263 "data_size": 65536 00:13:14.263 }, 00:13:14.263 { 00:13:14.263 "name": "BaseBdev3", 00:13:14.263 "uuid": "6bea9f56-34d6-4cae-8400-d4a164d23119", 00:13:14.263 "is_configured": true, 00:13:14.263 "data_offset": 0, 00:13:14.263 "data_size": 65536 00:13:14.263 } 00:13:14.263 ] 00:13:14.263 } 00:13:14.263 } 00:13:14.263 }' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:14.263 BaseBdev2 00:13:14.263 BaseBdev3' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.263 16:26:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.263 [2024-11-28 16:26:05.996510] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.263 [2024-11-28 16:26:05.996577] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.263 [2024-11-28 16:26:05.996658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.263 [2024-11-28 16:26:05.996950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.263 [2024-11-28 16:26:05.997013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.263 16:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90459 00:13:14.263 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90459 ']' 00:13:14.263 16:26:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90459 00:13:14.263 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:14.263 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.263 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90459 00:13:14.523 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.523 killing process with pid 90459 00:13:14.523 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.523 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90459' 00:13:14.523 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90459 00:13:14.523 [2024-11-28 16:26:06.046455] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.523 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90459 00:13:14.523 [2024-11-28 16:26:06.077579] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:14.784 00:13:14.784 real 0m8.967s 00:13:14.784 user 0m15.263s 00:13:14.784 sys 0m1.878s 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.784 ************************************ 00:13:14.784 END TEST raid5f_state_function_test 00:13:14.784 ************************************ 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.784 16:26:06 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:14.784 16:26:06 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:14.784 16:26:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.784 16:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.784 ************************************ 00:13:14.784 START TEST raid5f_state_function_test_sb 00:13:14.784 ************************************ 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:14.784 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:14.785 16:26:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91064 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:14.785 Process raid pid: 91064 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91064' 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91064 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91064 ']' 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.785 16:26:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.785 [2024-11-28 16:26:06.491145] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:14.785 [2024-11-28 16:26:06.491336] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.045 [2024-11-28 16:26:06.658549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.045 [2024-11-28 16:26:06.706051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.045 [2024-11-28 16:26:06.749617] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.045 [2024-11-28 16:26:06.749653] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.615 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.616 [2024-11-28 16:26:07.307923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.616 [2024-11-28 16:26:07.307974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.616 [2024-11-28 16:26:07.307988] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.616 [2024-11-28 16:26:07.307997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.616 [2024-11-28 16:26:07.308002] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:15.616 [2024-11-28 16:26:07.308015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.616 16:26:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.616 "name": "Existed_Raid", 00:13:15.616 "uuid": "2a4174c1-11a3-40a5-8793-d7cc8eea6db8", 00:13:15.616 "strip_size_kb": 64, 00:13:15.616 "state": "configuring", 00:13:15.616 "raid_level": "raid5f", 00:13:15.616 "superblock": true, 00:13:15.616 "num_base_bdevs": 3, 00:13:15.616 "num_base_bdevs_discovered": 0, 00:13:15.616 "num_base_bdevs_operational": 3, 00:13:15.616 "base_bdevs_list": [ 00:13:15.616 { 00:13:15.616 "name": "BaseBdev1", 00:13:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.616 "is_configured": false, 00:13:15.616 "data_offset": 0, 00:13:15.616 "data_size": 0 00:13:15.616 }, 00:13:15.616 { 00:13:15.616 "name": "BaseBdev2", 00:13:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.616 "is_configured": false, 00:13:15.616 "data_offset": 0, 00:13:15.616 "data_size": 0 00:13:15.616 }, 00:13:15.616 { 00:13:15.616 "name": "BaseBdev3", 00:13:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.616 "is_configured": false, 00:13:15.616 "data_offset": 0, 00:13:15.616 "data_size": 0 00:13:15.616 } 00:13:15.616 ] 00:13:15.616 }' 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.616 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 [2024-11-28 16:26:07.743044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.186 
[2024-11-28 16:26:07.743081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 [2024-11-28 16:26:07.755052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.186 [2024-11-28 16:26:07.755140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.186 [2024-11-28 16:26:07.755153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.186 [2024-11-28 16:26:07.755166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.186 [2024-11-28 16:26:07.755191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:16.186 [2024-11-28 16:26:07.755203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 [2024-11-28 16:26:07.776077] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.186 BaseBdev1 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.186 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.186 [ 00:13:16.186 { 00:13:16.186 "name": "BaseBdev1", 00:13:16.186 "aliases": [ 00:13:16.186 "93e7fe6b-da5d-453c-9684-cf0c67f45d0d" 00:13:16.186 ], 00:13:16.186 "product_name": "Malloc disk", 00:13:16.186 "block_size": 512, 00:13:16.186 
"num_blocks": 65536, 00:13:16.186 "uuid": "93e7fe6b-da5d-453c-9684-cf0c67f45d0d", 00:13:16.186 "assigned_rate_limits": { 00:13:16.187 "rw_ios_per_sec": 0, 00:13:16.187 "rw_mbytes_per_sec": 0, 00:13:16.187 "r_mbytes_per_sec": 0, 00:13:16.187 "w_mbytes_per_sec": 0 00:13:16.187 }, 00:13:16.187 "claimed": true, 00:13:16.187 "claim_type": "exclusive_write", 00:13:16.187 "zoned": false, 00:13:16.187 "supported_io_types": { 00:13:16.187 "read": true, 00:13:16.187 "write": true, 00:13:16.187 "unmap": true, 00:13:16.187 "flush": true, 00:13:16.187 "reset": true, 00:13:16.187 "nvme_admin": false, 00:13:16.187 "nvme_io": false, 00:13:16.187 "nvme_io_md": false, 00:13:16.187 "write_zeroes": true, 00:13:16.187 "zcopy": true, 00:13:16.187 "get_zone_info": false, 00:13:16.187 "zone_management": false, 00:13:16.187 "zone_append": false, 00:13:16.187 "compare": false, 00:13:16.187 "compare_and_write": false, 00:13:16.187 "abort": true, 00:13:16.187 "seek_hole": false, 00:13:16.187 "seek_data": false, 00:13:16.187 "copy": true, 00:13:16.187 "nvme_iov_md": false 00:13:16.187 }, 00:13:16.187 "memory_domains": [ 00:13:16.187 { 00:13:16.187 "dma_device_id": "system", 00:13:16.187 "dma_device_type": 1 00:13:16.187 }, 00:13:16.187 { 00:13:16.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.187 "dma_device_type": 2 00:13:16.187 } 00:13:16.187 ], 00:13:16.187 "driver_specific": {} 00:13:16.187 } 00:13:16.187 ] 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.187 "name": "Existed_Raid", 00:13:16.187 "uuid": "c33e19f2-5439-4278-8493-f367729b11ee", 00:13:16.187 "strip_size_kb": 64, 00:13:16.187 "state": "configuring", 00:13:16.187 "raid_level": "raid5f", 00:13:16.187 "superblock": true, 00:13:16.187 "num_base_bdevs": 3, 00:13:16.187 "num_base_bdevs_discovered": 1, 00:13:16.187 "num_base_bdevs_operational": 3, 00:13:16.187 "base_bdevs_list": [ 00:13:16.187 { 00:13:16.187 
"name": "BaseBdev1", 00:13:16.187 "uuid": "93e7fe6b-da5d-453c-9684-cf0c67f45d0d", 00:13:16.187 "is_configured": true, 00:13:16.187 "data_offset": 2048, 00:13:16.187 "data_size": 63488 00:13:16.187 }, 00:13:16.187 { 00:13:16.187 "name": "BaseBdev2", 00:13:16.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.187 "is_configured": false, 00:13:16.187 "data_offset": 0, 00:13:16.187 "data_size": 0 00:13:16.187 }, 00:13:16.187 { 00:13:16.187 "name": "BaseBdev3", 00:13:16.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.187 "is_configured": false, 00:13:16.187 "data_offset": 0, 00:13:16.187 "data_size": 0 00:13:16.187 } 00:13:16.187 ] 00:13:16.187 }' 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.187 16:26:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.757 [2024-11-28 16:26:08.259874] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.757 [2024-11-28 16:26:08.259920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:16.757 [2024-11-28 16:26:08.271921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.757 [2024-11-28 16:26:08.273668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.757 [2024-11-28 16:26:08.273758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.757 [2024-11-28 16:26:08.273772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:16.757 [2024-11-28 16:26:08.273785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.757 "name": "Existed_Raid", 00:13:16.757 "uuid": "44a4987e-334b-473a-872d-9cb635c70924", 00:13:16.757 "strip_size_kb": 64, 00:13:16.757 "state": "configuring", 00:13:16.757 "raid_level": "raid5f", 00:13:16.757 "superblock": true, 00:13:16.757 "num_base_bdevs": 3, 00:13:16.757 "num_base_bdevs_discovered": 1, 00:13:16.757 "num_base_bdevs_operational": 3, 00:13:16.757 "base_bdevs_list": [ 00:13:16.757 { 00:13:16.757 "name": "BaseBdev1", 00:13:16.757 "uuid": "93e7fe6b-da5d-453c-9684-cf0c67f45d0d", 00:13:16.757 "is_configured": true, 00:13:16.757 "data_offset": 2048, 00:13:16.757 "data_size": 63488 00:13:16.757 }, 00:13:16.757 { 00:13:16.757 "name": "BaseBdev2", 00:13:16.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.757 "is_configured": false, 00:13:16.757 "data_offset": 0, 00:13:16.757 "data_size": 0 00:13:16.757 }, 00:13:16.757 { 00:13:16.757 "name": "BaseBdev3", 00:13:16.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.757 "is_configured": false, 00:13:16.757 "data_offset": 0, 00:13:16.757 "data_size": 
0 00:13:16.757 } 00:13:16.757 ] 00:13:16.757 }' 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.757 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.017 [2024-11-28 16:26:08.748780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.017 BaseBdev2 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.017 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.017 [ 00:13:17.017 { 00:13:17.017 "name": "BaseBdev2", 00:13:17.017 "aliases": [ 00:13:17.017 "1bba5717-ff5f-417b-8f42-9f98e56ec9c3" 00:13:17.017 ], 00:13:17.018 "product_name": "Malloc disk", 00:13:17.018 "block_size": 512, 00:13:17.018 "num_blocks": 65536, 00:13:17.018 "uuid": "1bba5717-ff5f-417b-8f42-9f98e56ec9c3", 00:13:17.018 "assigned_rate_limits": { 00:13:17.018 "rw_ios_per_sec": 0, 00:13:17.018 "rw_mbytes_per_sec": 0, 00:13:17.018 "r_mbytes_per_sec": 0, 00:13:17.018 "w_mbytes_per_sec": 0 00:13:17.018 }, 00:13:17.018 "claimed": true, 00:13:17.018 "claim_type": "exclusive_write", 00:13:17.018 "zoned": false, 00:13:17.018 "supported_io_types": { 00:13:17.018 "read": true, 00:13:17.018 "write": true, 00:13:17.018 "unmap": true, 00:13:17.018 "flush": true, 00:13:17.018 "reset": true, 00:13:17.018 "nvme_admin": false, 00:13:17.018 "nvme_io": false, 00:13:17.018 "nvme_io_md": false, 00:13:17.018 "write_zeroes": true, 00:13:17.018 "zcopy": true, 00:13:17.018 "get_zone_info": false, 00:13:17.018 "zone_management": false, 00:13:17.018 "zone_append": false, 00:13:17.018 "compare": false, 00:13:17.018 "compare_and_write": false, 00:13:17.018 "abort": true, 00:13:17.018 "seek_hole": false, 00:13:17.018 "seek_data": false, 00:13:17.018 "copy": true, 00:13:17.018 "nvme_iov_md": false 00:13:17.018 }, 00:13:17.018 "memory_domains": [ 00:13:17.018 { 00:13:17.018 "dma_device_id": "system", 00:13:17.277 "dma_device_type": 1 00:13:17.277 }, 00:13:17.277 { 00:13:17.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.277 "dma_device_type": 2 00:13:17.277 } 
00:13:17.277 ], 00:13:17.277 "driver_specific": {} 00:13:17.277 } 00:13:17.277 ] 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.277 16:26:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.277 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.277 "name": "Existed_Raid", 00:13:17.277 "uuid": "44a4987e-334b-473a-872d-9cb635c70924", 00:13:17.277 "strip_size_kb": 64, 00:13:17.277 "state": "configuring", 00:13:17.277 "raid_level": "raid5f", 00:13:17.277 "superblock": true, 00:13:17.277 "num_base_bdevs": 3, 00:13:17.277 "num_base_bdevs_discovered": 2, 00:13:17.278 "num_base_bdevs_operational": 3, 00:13:17.278 "base_bdevs_list": [ 00:13:17.278 { 00:13:17.278 "name": "BaseBdev1", 00:13:17.278 "uuid": "93e7fe6b-da5d-453c-9684-cf0c67f45d0d", 00:13:17.278 "is_configured": true, 00:13:17.278 "data_offset": 2048, 00:13:17.278 "data_size": 63488 00:13:17.278 }, 00:13:17.278 { 00:13:17.278 "name": "BaseBdev2", 00:13:17.278 "uuid": "1bba5717-ff5f-417b-8f42-9f98e56ec9c3", 00:13:17.278 "is_configured": true, 00:13:17.278 "data_offset": 2048, 00:13:17.278 "data_size": 63488 00:13:17.278 }, 00:13:17.278 { 00:13:17.278 "name": "BaseBdev3", 00:13:17.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.278 "is_configured": false, 00:13:17.278 "data_offset": 0, 00:13:17.278 "data_size": 0 00:13:17.278 } 00:13:17.278 ] 00:13:17.278 }' 00:13:17.278 16:26:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.278 16:26:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.537 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.537 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:17.537 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.537 BaseBdev3 00:13:17.537 [2024-11-28 16:26:09.287015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.537 [2024-11-28 16:26:09.287213] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:17.537 [2024-11-28 16:26:09.287234] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:17.537 [2024-11-28 16:26:09.287502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:17.537 [2024-11-28 16:26:09.287970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:17.538 [2024-11-28 16:26:09.287992] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:17.538 [2024-11-28 16:26:09.288150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.538 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.798 [ 00:13:17.798 { 00:13:17.798 "name": "BaseBdev3", 00:13:17.798 "aliases": [ 00:13:17.798 "24e7ed08-05aa-463a-989d-af6876f034bc" 00:13:17.798 ], 00:13:17.798 "product_name": "Malloc disk", 00:13:17.798 "block_size": 512, 00:13:17.798 "num_blocks": 65536, 00:13:17.798 "uuid": "24e7ed08-05aa-463a-989d-af6876f034bc", 00:13:17.798 "assigned_rate_limits": { 00:13:17.798 "rw_ios_per_sec": 0, 00:13:17.798 "rw_mbytes_per_sec": 0, 00:13:17.798 "r_mbytes_per_sec": 0, 00:13:17.798 "w_mbytes_per_sec": 0 00:13:17.798 }, 00:13:17.798 "claimed": true, 00:13:17.798 "claim_type": "exclusive_write", 00:13:17.798 "zoned": false, 00:13:17.798 "supported_io_types": { 00:13:17.798 "read": true, 00:13:17.798 "write": true, 00:13:17.798 "unmap": true, 00:13:17.798 "flush": true, 00:13:17.798 "reset": true, 00:13:17.798 "nvme_admin": false, 00:13:17.798 "nvme_io": false, 00:13:17.798 "nvme_io_md": false, 00:13:17.798 "write_zeroes": true, 00:13:17.798 "zcopy": true, 00:13:17.798 "get_zone_info": false, 00:13:17.798 "zone_management": false, 00:13:17.798 "zone_append": false, 00:13:17.798 "compare": false, 00:13:17.798 "compare_and_write": false, 00:13:17.798 "abort": true, 00:13:17.798 "seek_hole": false, 00:13:17.798 "seek_data": false, 00:13:17.798 "copy": true, 00:13:17.798 "nvme_iov_md": 
false 00:13:17.798 }, 00:13:17.798 "memory_domains": [ 00:13:17.798 { 00:13:17.798 "dma_device_id": "system", 00:13:17.798 "dma_device_type": 1 00:13:17.798 }, 00:13:17.798 { 00:13:17.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.798 "dma_device_type": 2 00:13:17.798 } 00:13:17.798 ], 00:13:17.798 "driver_specific": {} 00:13:17.798 } 00:13:17.798 ] 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.798 "name": "Existed_Raid", 00:13:17.798 "uuid": "44a4987e-334b-473a-872d-9cb635c70924", 00:13:17.798 "strip_size_kb": 64, 00:13:17.798 "state": "online", 00:13:17.798 "raid_level": "raid5f", 00:13:17.798 "superblock": true, 00:13:17.798 "num_base_bdevs": 3, 00:13:17.798 "num_base_bdevs_discovered": 3, 00:13:17.798 "num_base_bdevs_operational": 3, 00:13:17.798 "base_bdevs_list": [ 00:13:17.798 { 00:13:17.798 "name": "BaseBdev1", 00:13:17.798 "uuid": "93e7fe6b-da5d-453c-9684-cf0c67f45d0d", 00:13:17.798 "is_configured": true, 00:13:17.798 "data_offset": 2048, 00:13:17.798 "data_size": 63488 00:13:17.798 }, 00:13:17.798 { 00:13:17.798 "name": "BaseBdev2", 00:13:17.798 "uuid": "1bba5717-ff5f-417b-8f42-9f98e56ec9c3", 00:13:17.798 "is_configured": true, 00:13:17.798 "data_offset": 2048, 00:13:17.798 "data_size": 63488 00:13:17.798 }, 00:13:17.798 { 00:13:17.798 "name": "BaseBdev3", 00:13:17.798 "uuid": "24e7ed08-05aa-463a-989d-af6876f034bc", 00:13:17.798 "is_configured": true, 00:13:17.798 "data_offset": 2048, 00:13:17.798 "data_size": 63488 00:13:17.798 } 00:13:17.798 ] 00:13:17.798 }' 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.798 16:26:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.059 [2024-11-28 16:26:09.794324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.059 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.059 "name": "Existed_Raid", 00:13:18.059 "aliases": [ 00:13:18.059 "44a4987e-334b-473a-872d-9cb635c70924" 00:13:18.059 ], 00:13:18.059 "product_name": "Raid Volume", 00:13:18.059 "block_size": 512, 00:13:18.059 "num_blocks": 126976, 00:13:18.059 "uuid": "44a4987e-334b-473a-872d-9cb635c70924", 00:13:18.059 "assigned_rate_limits": { 00:13:18.059 "rw_ios_per_sec": 0, 00:13:18.059 "rw_mbytes_per_sec": 0, 00:13:18.059 "r_mbytes_per_sec": 
0, 00:13:18.059 "w_mbytes_per_sec": 0 00:13:18.059 }, 00:13:18.059 "claimed": false, 00:13:18.059 "zoned": false, 00:13:18.059 "supported_io_types": { 00:13:18.059 "read": true, 00:13:18.059 "write": true, 00:13:18.059 "unmap": false, 00:13:18.059 "flush": false, 00:13:18.059 "reset": true, 00:13:18.059 "nvme_admin": false, 00:13:18.059 "nvme_io": false, 00:13:18.059 "nvme_io_md": false, 00:13:18.059 "write_zeroes": true, 00:13:18.059 "zcopy": false, 00:13:18.059 "get_zone_info": false, 00:13:18.059 "zone_management": false, 00:13:18.059 "zone_append": false, 00:13:18.059 "compare": false, 00:13:18.059 "compare_and_write": false, 00:13:18.059 "abort": false, 00:13:18.059 "seek_hole": false, 00:13:18.059 "seek_data": false, 00:13:18.059 "copy": false, 00:13:18.059 "nvme_iov_md": false 00:13:18.059 }, 00:13:18.059 "driver_specific": { 00:13:18.059 "raid": { 00:13:18.059 "uuid": "44a4987e-334b-473a-872d-9cb635c70924", 00:13:18.059 "strip_size_kb": 64, 00:13:18.059 "state": "online", 00:13:18.059 "raid_level": "raid5f", 00:13:18.059 "superblock": true, 00:13:18.059 "num_base_bdevs": 3, 00:13:18.059 "num_base_bdevs_discovered": 3, 00:13:18.059 "num_base_bdevs_operational": 3, 00:13:18.059 "base_bdevs_list": [ 00:13:18.059 { 00:13:18.059 "name": "BaseBdev1", 00:13:18.059 "uuid": "93e7fe6b-da5d-453c-9684-cf0c67f45d0d", 00:13:18.059 "is_configured": true, 00:13:18.059 "data_offset": 2048, 00:13:18.059 "data_size": 63488 00:13:18.059 }, 00:13:18.059 { 00:13:18.059 "name": "BaseBdev2", 00:13:18.059 "uuid": "1bba5717-ff5f-417b-8f42-9f98e56ec9c3", 00:13:18.059 "is_configured": true, 00:13:18.059 "data_offset": 2048, 00:13:18.059 "data_size": 63488 00:13:18.059 }, 00:13:18.059 { 00:13:18.059 "name": "BaseBdev3", 00:13:18.059 "uuid": "24e7ed08-05aa-463a-989d-af6876f034bc", 00:13:18.059 "is_configured": true, 00:13:18.059 "data_offset": 2048, 00:13:18.059 "data_size": 63488 00:13:18.059 } 00:13:18.059 ] 00:13:18.059 } 00:13:18.059 } 00:13:18.059 }' 00:13:18.059 16:26:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:18.320 BaseBdev2 00:13:18.320 BaseBdev3' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.320 16:26:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:18.320 16:26:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.320 [2024-11-28 16:26:10.049765] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.320 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.580 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.580 "name": "Existed_Raid", 00:13:18.580 "uuid": "44a4987e-334b-473a-872d-9cb635c70924", 00:13:18.580 "strip_size_kb": 64, 00:13:18.580 "state": "online", 00:13:18.580 "raid_level": "raid5f", 00:13:18.580 "superblock": true, 00:13:18.580 "num_base_bdevs": 3, 00:13:18.580 "num_base_bdevs_discovered": 2, 00:13:18.580 "num_base_bdevs_operational": 2, 00:13:18.580 "base_bdevs_list": [ 00:13:18.580 { 00:13:18.580 "name": null, 00:13:18.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.580 "is_configured": false, 00:13:18.580 "data_offset": 0, 00:13:18.580 "data_size": 63488 00:13:18.580 }, 00:13:18.580 { 00:13:18.580 "name": "BaseBdev2", 00:13:18.580 "uuid": "1bba5717-ff5f-417b-8f42-9f98e56ec9c3", 00:13:18.580 "is_configured": true, 00:13:18.580 "data_offset": 2048, 00:13:18.580 "data_size": 63488 00:13:18.580 }, 00:13:18.580 { 00:13:18.580 "name": "BaseBdev3", 00:13:18.580 "uuid": "24e7ed08-05aa-463a-989d-af6876f034bc", 00:13:18.580 "is_configured": true, 00:13:18.580 "data_offset": 2048, 00:13:18.580 "data_size": 63488 00:13:18.580 } 00:13:18.580 ] 00:13:18.580 }' 00:13:18.580 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.580 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 16:26:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 [2024-11-28 16:26:10.536313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.841 [2024-11-28 16:26:10.536462] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.841 [2024-11-28 16:26:10.547603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.841 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 [2024-11-28 16:26:10.603543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.841 [2024-11-28 16:26:10.603594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.102 BaseBdev2 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.102 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.102 [ 00:13:19.102 { 00:13:19.102 "name": "BaseBdev2", 00:13:19.102 "aliases": [ 00:13:19.102 "9e64284c-d23f-4c5e-ae41-ea08f8399ee7" 00:13:19.102 ], 00:13:19.102 "product_name": "Malloc disk", 00:13:19.102 "block_size": 512, 00:13:19.102 "num_blocks": 65536, 00:13:19.102 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:19.102 "assigned_rate_limits": { 00:13:19.102 "rw_ios_per_sec": 0, 00:13:19.102 "rw_mbytes_per_sec": 0, 00:13:19.102 "r_mbytes_per_sec": 0, 00:13:19.102 "w_mbytes_per_sec": 0 00:13:19.102 }, 00:13:19.102 "claimed": false, 00:13:19.102 "zoned": false, 00:13:19.102 "supported_io_types": { 00:13:19.102 "read": true, 00:13:19.102 "write": true, 00:13:19.102 "unmap": true, 00:13:19.102 "flush": true, 00:13:19.102 "reset": true, 00:13:19.102 "nvme_admin": false, 00:13:19.102 "nvme_io": false, 00:13:19.102 "nvme_io_md": false, 00:13:19.102 "write_zeroes": true, 00:13:19.103 "zcopy": true, 00:13:19.103 "get_zone_info": false, 00:13:19.103 "zone_management": false, 00:13:19.103 "zone_append": false, 
00:13:19.103 "compare": false, 00:13:19.103 "compare_and_write": false, 00:13:19.103 "abort": true, 00:13:19.103 "seek_hole": false, 00:13:19.103 "seek_data": false, 00:13:19.103 "copy": true, 00:13:19.103 "nvme_iov_md": false 00:13:19.103 }, 00:13:19.103 "memory_domains": [ 00:13:19.103 { 00:13:19.103 "dma_device_id": "system", 00:13:19.103 "dma_device_type": 1 00:13:19.103 }, 00:13:19.103 { 00:13:19.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.103 "dma_device_type": 2 00:13:19.103 } 00:13:19.103 ], 00:13:19.103 "driver_specific": {} 00:13:19.103 } 00:13:19.103 ] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.103 BaseBdev3 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:19.103 
16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.103 [ 00:13:19.103 { 00:13:19.103 "name": "BaseBdev3", 00:13:19.103 "aliases": [ 00:13:19.103 "a52882e8-a738-4b0e-92d9-4289c73e6539" 00:13:19.103 ], 00:13:19.103 "product_name": "Malloc disk", 00:13:19.103 "block_size": 512, 00:13:19.103 "num_blocks": 65536, 00:13:19.103 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:19.103 "assigned_rate_limits": { 00:13:19.103 "rw_ios_per_sec": 0, 00:13:19.103 "rw_mbytes_per_sec": 0, 00:13:19.103 "r_mbytes_per_sec": 0, 00:13:19.103 "w_mbytes_per_sec": 0 00:13:19.103 }, 00:13:19.103 "claimed": false, 00:13:19.103 "zoned": false, 00:13:19.103 "supported_io_types": { 00:13:19.103 "read": true, 00:13:19.103 "write": true, 00:13:19.103 "unmap": true, 00:13:19.103 "flush": true, 00:13:19.103 "reset": true, 00:13:19.103 "nvme_admin": false, 00:13:19.103 "nvme_io": false, 00:13:19.103 "nvme_io_md": false, 00:13:19.103 "write_zeroes": true, 00:13:19.103 "zcopy": true, 00:13:19.103 "get_zone_info": 
false, 00:13:19.103 "zone_management": false, 00:13:19.103 "zone_append": false, 00:13:19.103 "compare": false, 00:13:19.103 "compare_and_write": false, 00:13:19.103 "abort": true, 00:13:19.103 "seek_hole": false, 00:13:19.103 "seek_data": false, 00:13:19.103 "copy": true, 00:13:19.103 "nvme_iov_md": false 00:13:19.103 }, 00:13:19.103 "memory_domains": [ 00:13:19.103 { 00:13:19.103 "dma_device_id": "system", 00:13:19.103 "dma_device_type": 1 00:13:19.103 }, 00:13:19.103 { 00:13:19.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.103 "dma_device_type": 2 00:13:19.103 } 00:13:19.103 ], 00:13:19.103 "driver_specific": {} 00:13:19.103 } 00:13:19.103 ] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.103 [2024-11-28 16:26:10.762258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.103 [2024-11-28 16:26:10.762301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.103 [2024-11-28 16:26:10.762320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.103 [2024-11-28 16:26:10.764131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.103 16:26:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.103 "name": "Existed_Raid", 00:13:19.103 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:19.103 "strip_size_kb": 64, 00:13:19.103 "state": "configuring", 00:13:19.103 "raid_level": "raid5f", 00:13:19.103 "superblock": true, 00:13:19.103 "num_base_bdevs": 3, 00:13:19.103 "num_base_bdevs_discovered": 2, 00:13:19.103 "num_base_bdevs_operational": 3, 00:13:19.103 "base_bdevs_list": [ 00:13:19.103 { 00:13:19.103 "name": "BaseBdev1", 00:13:19.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.103 "is_configured": false, 00:13:19.103 "data_offset": 0, 00:13:19.103 "data_size": 0 00:13:19.103 }, 00:13:19.103 { 00:13:19.103 "name": "BaseBdev2", 00:13:19.103 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:19.103 "is_configured": true, 00:13:19.103 "data_offset": 2048, 00:13:19.103 "data_size": 63488 00:13:19.103 }, 00:13:19.103 { 00:13:19.103 "name": "BaseBdev3", 00:13:19.103 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:19.103 "is_configured": true, 00:13:19.103 "data_offset": 2048, 00:13:19.103 "data_size": 63488 00:13:19.103 } 00:13:19.103 ] 00:13:19.103 }' 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.103 16:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.674 [2024-11-28 16:26:11.221448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.674 
16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.674 "name": "Existed_Raid", 00:13:19.674 "uuid": 
"822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:19.674 "strip_size_kb": 64, 00:13:19.674 "state": "configuring", 00:13:19.674 "raid_level": "raid5f", 00:13:19.674 "superblock": true, 00:13:19.674 "num_base_bdevs": 3, 00:13:19.674 "num_base_bdevs_discovered": 1, 00:13:19.674 "num_base_bdevs_operational": 3, 00:13:19.674 "base_bdevs_list": [ 00:13:19.674 { 00:13:19.674 "name": "BaseBdev1", 00:13:19.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.674 "is_configured": false, 00:13:19.674 "data_offset": 0, 00:13:19.674 "data_size": 0 00:13:19.674 }, 00:13:19.674 { 00:13:19.674 "name": null, 00:13:19.674 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:19.674 "is_configured": false, 00:13:19.674 "data_offset": 0, 00:13:19.674 "data_size": 63488 00:13:19.674 }, 00:13:19.674 { 00:13:19.674 "name": "BaseBdev3", 00:13:19.674 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:19.674 "is_configured": true, 00:13:19.674 "data_offset": 2048, 00:13:19.674 "data_size": 63488 00:13:19.674 } 00:13:19.674 ] 00:13:19.674 }' 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.674 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:19.936 16:26:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.936 [2024-11-28 16:26:11.683665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.936 BaseBdev1 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:19.936 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.197 [ 00:13:20.197 { 00:13:20.197 "name": "BaseBdev1", 00:13:20.197 "aliases": [ 00:13:20.197 "04503930-85cd-42c9-8365-0428349dda4a" 00:13:20.197 ], 00:13:20.197 "product_name": "Malloc disk", 00:13:20.197 "block_size": 512, 00:13:20.197 "num_blocks": 65536, 00:13:20.197 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:20.197 "assigned_rate_limits": { 00:13:20.197 "rw_ios_per_sec": 0, 00:13:20.197 "rw_mbytes_per_sec": 0, 00:13:20.197 "r_mbytes_per_sec": 0, 00:13:20.197 "w_mbytes_per_sec": 0 00:13:20.197 }, 00:13:20.197 "claimed": true, 00:13:20.197 "claim_type": "exclusive_write", 00:13:20.197 "zoned": false, 00:13:20.197 "supported_io_types": { 00:13:20.197 "read": true, 00:13:20.197 "write": true, 00:13:20.197 "unmap": true, 00:13:20.197 "flush": true, 00:13:20.197 "reset": true, 00:13:20.197 "nvme_admin": false, 00:13:20.197 "nvme_io": false, 00:13:20.197 "nvme_io_md": false, 00:13:20.197 "write_zeroes": true, 00:13:20.197 "zcopy": true, 00:13:20.197 "get_zone_info": false, 00:13:20.197 "zone_management": false, 00:13:20.197 "zone_append": false, 00:13:20.197 "compare": false, 00:13:20.197 "compare_and_write": false, 00:13:20.197 "abort": true, 00:13:20.197 "seek_hole": false, 00:13:20.197 "seek_data": false, 00:13:20.197 "copy": true, 00:13:20.197 "nvme_iov_md": false 00:13:20.197 }, 00:13:20.197 "memory_domains": [ 00:13:20.197 { 00:13:20.197 "dma_device_id": "system", 00:13:20.197 "dma_device_type": 1 00:13:20.197 }, 00:13:20.197 { 00:13:20.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.197 "dma_device_type": 2 00:13:20.197 } 00:13:20.197 ], 00:13:20.197 "driver_specific": {} 00:13:20.197 } 00:13:20.197 ] 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.197 "name": "Existed_Raid", 00:13:20.197 "uuid": 
"822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:20.197 "strip_size_kb": 64, 00:13:20.197 "state": "configuring", 00:13:20.197 "raid_level": "raid5f", 00:13:20.197 "superblock": true, 00:13:20.197 "num_base_bdevs": 3, 00:13:20.197 "num_base_bdevs_discovered": 2, 00:13:20.197 "num_base_bdevs_operational": 3, 00:13:20.197 "base_bdevs_list": [ 00:13:20.197 { 00:13:20.197 "name": "BaseBdev1", 00:13:20.197 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:20.197 "is_configured": true, 00:13:20.197 "data_offset": 2048, 00:13:20.197 "data_size": 63488 00:13:20.197 }, 00:13:20.197 { 00:13:20.197 "name": null, 00:13:20.197 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:20.197 "is_configured": false, 00:13:20.197 "data_offset": 0, 00:13:20.197 "data_size": 63488 00:13:20.197 }, 00:13:20.197 { 00:13:20.197 "name": "BaseBdev3", 00:13:20.197 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:20.197 "is_configured": true, 00:13:20.197 "data_offset": 2048, 00:13:20.197 "data_size": 63488 00:13:20.197 } 00:13:20.197 ] 00:13:20.197 }' 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.197 16:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:20.457 16:26:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.457 [2024-11-28 16:26:12.182858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.457 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.718 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.718 "name": "Existed_Raid", 00:13:20.718 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:20.718 "strip_size_kb": 64, 00:13:20.718 "state": "configuring", 00:13:20.718 "raid_level": "raid5f", 00:13:20.718 "superblock": true, 00:13:20.718 "num_base_bdevs": 3, 00:13:20.718 "num_base_bdevs_discovered": 1, 00:13:20.718 "num_base_bdevs_operational": 3, 00:13:20.718 "base_bdevs_list": [ 00:13:20.718 { 00:13:20.718 "name": "BaseBdev1", 00:13:20.718 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:20.718 "is_configured": true, 00:13:20.718 "data_offset": 2048, 00:13:20.718 "data_size": 63488 00:13:20.718 }, 00:13:20.718 { 00:13:20.718 "name": null, 00:13:20.718 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:20.718 "is_configured": false, 00:13:20.718 "data_offset": 0, 00:13:20.718 "data_size": 63488 00:13:20.718 }, 00:13:20.718 { 00:13:20.718 "name": null, 00:13:20.718 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:20.718 "is_configured": false, 00:13:20.718 "data_offset": 0, 00:13:20.718 "data_size": 63488 00:13:20.718 } 00:13:20.718 ] 00:13:20.718 }' 00:13:20.718 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.718 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.978 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 [2024-11-28 16:26:12.658051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.979 16:26:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.979 "name": "Existed_Raid", 00:13:20.979 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:20.979 "strip_size_kb": 64, 00:13:20.979 "state": "configuring", 00:13:20.979 "raid_level": "raid5f", 00:13:20.979 "superblock": true, 00:13:20.979 "num_base_bdevs": 3, 00:13:20.979 "num_base_bdevs_discovered": 2, 00:13:20.979 "num_base_bdevs_operational": 3, 00:13:20.979 "base_bdevs_list": [ 00:13:20.979 { 00:13:20.979 "name": "BaseBdev1", 00:13:20.979 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:20.979 "is_configured": true, 00:13:20.979 "data_offset": 2048, 00:13:20.979 "data_size": 63488 00:13:20.979 }, 00:13:20.979 { 00:13:20.979 "name": null, 00:13:20.979 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:20.979 "is_configured": false, 00:13:20.979 "data_offset": 0, 00:13:20.979 "data_size": 63488 00:13:20.979 }, 00:13:20.979 { 00:13:20.979 "name": "BaseBdev3", 00:13:20.979 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:20.979 
"is_configured": true, 00:13:20.979 "data_offset": 2048, 00:13:20.979 "data_size": 63488 00:13:20.979 } 00:13:20.979 ] 00:13:20.979 }' 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.979 16:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.551 [2024-11-28 16:26:13.165172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.551 "name": "Existed_Raid", 00:13:21.551 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:21.551 "strip_size_kb": 64, 00:13:21.551 "state": "configuring", 00:13:21.551 "raid_level": "raid5f", 00:13:21.551 "superblock": true, 00:13:21.551 "num_base_bdevs": 3, 00:13:21.551 "num_base_bdevs_discovered": 1, 00:13:21.551 "num_base_bdevs_operational": 3, 00:13:21.551 "base_bdevs_list": [ 00:13:21.551 { 00:13:21.551 "name": null, 00:13:21.551 
"uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:21.551 "is_configured": false, 00:13:21.551 "data_offset": 0, 00:13:21.551 "data_size": 63488 00:13:21.551 }, 00:13:21.551 { 00:13:21.551 "name": null, 00:13:21.551 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:21.551 "is_configured": false, 00:13:21.551 "data_offset": 0, 00:13:21.551 "data_size": 63488 00:13:21.551 }, 00:13:21.551 { 00:13:21.551 "name": "BaseBdev3", 00:13:21.551 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:21.551 "is_configured": true, 00:13:21.551 "data_offset": 2048, 00:13:21.551 "data_size": 63488 00:13:21.551 } 00:13:21.551 ] 00:13:21.551 }' 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.551 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.811 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.811 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.811 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.811 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 [2024-11-28 16:26:13.626841] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:22.071 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.071 "name": "Existed_Raid", 00:13:22.071 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:22.071 "strip_size_kb": 64, 00:13:22.071 "state": "configuring", 00:13:22.071 "raid_level": "raid5f", 00:13:22.071 "superblock": true, 00:13:22.071 "num_base_bdevs": 3, 00:13:22.071 "num_base_bdevs_discovered": 2, 00:13:22.071 "num_base_bdevs_operational": 3, 00:13:22.071 "base_bdevs_list": [ 00:13:22.071 { 00:13:22.071 "name": null, 00:13:22.071 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:22.071 "is_configured": false, 00:13:22.072 "data_offset": 0, 00:13:22.072 "data_size": 63488 00:13:22.072 }, 00:13:22.072 { 00:13:22.072 "name": "BaseBdev2", 00:13:22.072 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:22.072 "is_configured": true, 00:13:22.072 "data_offset": 2048, 00:13:22.072 "data_size": 63488 00:13:22.072 }, 00:13:22.072 { 00:13:22.072 "name": "BaseBdev3", 00:13:22.072 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:22.072 "is_configured": true, 00:13:22.072 "data_offset": 2048, 00:13:22.072 "data_size": 63488 00:13:22.072 } 00:13:22.072 ] 00:13:22.072 }' 00:13:22.072 16:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.072 16:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.331 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.332 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.332 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.332 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04503930-85cd-42c9-8365-0428349dda4a 00:13:22.332 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.332 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.592 [2024-11-28 16:26:14.104772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:22.592 [2024-11-28 16:26:14.104949] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:22.592 [2024-11-28 16:26:14.104966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:22.592 [2024-11-28 16:26:14.105222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:22.592 NewBaseBdev 00:13:22.592 [2024-11-28 16:26:14.105687] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:22.592 [2024-11-28 16:26:14.105708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:22.592 [2024-11-28 
16:26:14.105810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.592 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.592 [ 00:13:22.592 { 00:13:22.592 "name": "NewBaseBdev", 00:13:22.592 "aliases": [ 00:13:22.592 "04503930-85cd-42c9-8365-0428349dda4a" 00:13:22.592 ], 00:13:22.592 "product_name": "Malloc disk", 00:13:22.592 "block_size": 512, 00:13:22.592 "num_blocks": 
65536, 00:13:22.592 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:22.592 "assigned_rate_limits": { 00:13:22.592 "rw_ios_per_sec": 0, 00:13:22.592 "rw_mbytes_per_sec": 0, 00:13:22.592 "r_mbytes_per_sec": 0, 00:13:22.592 "w_mbytes_per_sec": 0 00:13:22.592 }, 00:13:22.592 "claimed": true, 00:13:22.592 "claim_type": "exclusive_write", 00:13:22.592 "zoned": false, 00:13:22.592 "supported_io_types": { 00:13:22.592 "read": true, 00:13:22.592 "write": true, 00:13:22.592 "unmap": true, 00:13:22.592 "flush": true, 00:13:22.592 "reset": true, 00:13:22.592 "nvme_admin": false, 00:13:22.592 "nvme_io": false, 00:13:22.592 "nvme_io_md": false, 00:13:22.592 "write_zeroes": true, 00:13:22.592 "zcopy": true, 00:13:22.592 "get_zone_info": false, 00:13:22.592 "zone_management": false, 00:13:22.592 "zone_append": false, 00:13:22.592 "compare": false, 00:13:22.592 "compare_and_write": false, 00:13:22.592 "abort": true, 00:13:22.592 "seek_hole": false, 00:13:22.592 "seek_data": false, 00:13:22.592 "copy": true, 00:13:22.592 "nvme_iov_md": false 00:13:22.592 }, 00:13:22.592 "memory_domains": [ 00:13:22.592 { 00:13:22.592 "dma_device_id": "system", 00:13:22.592 "dma_device_type": 1 00:13:22.592 }, 00:13:22.592 { 00:13:22.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.593 "dma_device_type": 2 00:13:22.593 } 00:13:22.593 ], 00:13:22.593 "driver_specific": {} 00:13:22.593 } 00:13:22.593 ] 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.593 "name": "Existed_Raid", 00:13:22.593 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:22.593 "strip_size_kb": 64, 00:13:22.593 "state": "online", 00:13:22.593 "raid_level": "raid5f", 00:13:22.593 "superblock": true, 00:13:22.593 "num_base_bdevs": 3, 00:13:22.593 "num_base_bdevs_discovered": 3, 00:13:22.593 "num_base_bdevs_operational": 3, 00:13:22.593 "base_bdevs_list": [ 00:13:22.593 { 00:13:22.593 "name": "NewBaseBdev", 00:13:22.593 "uuid": 
"04503930-85cd-42c9-8365-0428349dda4a", 00:13:22.593 "is_configured": true, 00:13:22.593 "data_offset": 2048, 00:13:22.593 "data_size": 63488 00:13:22.593 }, 00:13:22.593 { 00:13:22.593 "name": "BaseBdev2", 00:13:22.593 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:22.593 "is_configured": true, 00:13:22.593 "data_offset": 2048, 00:13:22.593 "data_size": 63488 00:13:22.593 }, 00:13:22.593 { 00:13:22.593 "name": "BaseBdev3", 00:13:22.593 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:22.593 "is_configured": true, 00:13:22.593 "data_offset": 2048, 00:13:22.593 "data_size": 63488 00:13:22.593 } 00:13:22.593 ] 00:13:22.593 }' 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.593 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.853 16:26:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.853 [2024-11-28 16:26:14.612154] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:23.114 "name": "Existed_Raid", 00:13:23.114 "aliases": [ 00:13:23.114 "822f3edd-2d38-4d30-ac6c-8691d2c7b398" 00:13:23.114 ], 00:13:23.114 "product_name": "Raid Volume", 00:13:23.114 "block_size": 512, 00:13:23.114 "num_blocks": 126976, 00:13:23.114 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:23.114 "assigned_rate_limits": { 00:13:23.114 "rw_ios_per_sec": 0, 00:13:23.114 "rw_mbytes_per_sec": 0, 00:13:23.114 "r_mbytes_per_sec": 0, 00:13:23.114 "w_mbytes_per_sec": 0 00:13:23.114 }, 00:13:23.114 "claimed": false, 00:13:23.114 "zoned": false, 00:13:23.114 "supported_io_types": { 00:13:23.114 "read": true, 00:13:23.114 "write": true, 00:13:23.114 "unmap": false, 00:13:23.114 "flush": false, 00:13:23.114 "reset": true, 00:13:23.114 "nvme_admin": false, 00:13:23.114 "nvme_io": false, 00:13:23.114 "nvme_io_md": false, 00:13:23.114 "write_zeroes": true, 00:13:23.114 "zcopy": false, 00:13:23.114 "get_zone_info": false, 00:13:23.114 "zone_management": false, 00:13:23.114 "zone_append": false, 00:13:23.114 "compare": false, 00:13:23.114 "compare_and_write": false, 00:13:23.114 "abort": false, 00:13:23.114 "seek_hole": false, 00:13:23.114 "seek_data": false, 00:13:23.114 "copy": false, 00:13:23.114 "nvme_iov_md": false 00:13:23.114 }, 00:13:23.114 "driver_specific": { 00:13:23.114 "raid": { 00:13:23.114 "uuid": "822f3edd-2d38-4d30-ac6c-8691d2c7b398", 00:13:23.114 "strip_size_kb": 64, 00:13:23.114 "state": "online", 00:13:23.114 "raid_level": "raid5f", 00:13:23.114 "superblock": true, 00:13:23.114 "num_base_bdevs": 3, 00:13:23.114 "num_base_bdevs_discovered": 3, 00:13:23.114 
"num_base_bdevs_operational": 3, 00:13:23.114 "base_bdevs_list": [ 00:13:23.114 { 00:13:23.114 "name": "NewBaseBdev", 00:13:23.114 "uuid": "04503930-85cd-42c9-8365-0428349dda4a", 00:13:23.114 "is_configured": true, 00:13:23.114 "data_offset": 2048, 00:13:23.114 "data_size": 63488 00:13:23.114 }, 00:13:23.114 { 00:13:23.114 "name": "BaseBdev2", 00:13:23.114 "uuid": "9e64284c-d23f-4c5e-ae41-ea08f8399ee7", 00:13:23.114 "is_configured": true, 00:13:23.114 "data_offset": 2048, 00:13:23.114 "data_size": 63488 00:13:23.114 }, 00:13:23.114 { 00:13:23.114 "name": "BaseBdev3", 00:13:23.114 "uuid": "a52882e8-a738-4b0e-92d9-4289c73e6539", 00:13:23.114 "is_configured": true, 00:13:23.114 "data_offset": 2048, 00:13:23.114 "data_size": 63488 00:13:23.114 } 00:13:23.114 ] 00:13:23.114 } 00:13:23.114 } 00:13:23.114 }' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:23.114 BaseBdev2 00:13:23.114 BaseBdev3' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.114 16:26:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.114 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 [2024-11-28 16:26:14.875635] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.114 [2024-11-28 16:26:14.875662] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.114 [2024-11-28 16:26:14.875723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.115 [2024-11-28 16:26:14.875972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.115 [2024-11-28 16:26:14.875990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:23.115 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.115 16:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91064 00:13:23.115 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91064 ']' 00:13:23.115 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91064 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:23.375 
16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91064 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:23.375 killing process with pid 91064 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91064' 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91064 00:13:23.375 [2024-11-28 16:26:14.924770] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.375 16:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91064 00:13:23.375 [2024-11-28 16:26:14.955339] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.635 16:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:23.635 00:13:23.635 real 0m8.819s 00:13:23.635 user 0m14.976s 00:13:23.635 sys 0m1.803s 00:13:23.635 16:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.635 16:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.635 ************************************ 00:13:23.635 END TEST raid5f_state_function_test_sb 00:13:23.635 ************************************ 00:13:23.635 16:26:15 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:23.635 16:26:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:23.635 16:26:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.635 16:26:15 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:13:23.635 ************************************ 00:13:23.635 START TEST raid5f_superblock_test 00:13:23.635 ************************************ 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 
-- # strip_size_create_arg='-z 64' 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91662 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91662 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91662 ']' 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.635 16:26:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.635 [2024-11-28 16:26:15.379396] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:23.635 [2024-11-28 16:26:15.379541] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91662 ] 00:13:23.895 [2024-11-28 16:26:15.534608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.895 [2024-11-28 16:26:15.579000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.895 [2024-11-28 16:26:15.622201] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.895 [2024-11-28 16:26:15.622237] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.465 malloc1 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.465 [2024-11-28 16:26:16.213050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.465 [2024-11-28 16:26:16.213144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.465 [2024-11-28 16:26:16.213169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.465 [2024-11-28 16:26:16.213190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.465 [2024-11-28 16:26:16.215265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.465 [2024-11-28 16:26:16.215304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.465 pt1 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.465 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.726 malloc2 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.726 [2024-11-28 16:26:16.256697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:24.726 [2024-11-28 16:26:16.256792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.726 [2024-11-28 16:26:16.256826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.726 [2024-11-28 16:26:16.256870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.726 [2024-11-28 16:26:16.261375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.726 [2024-11-28 16:26:16.261445] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:24.726 pt2 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.726 malloc3 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.726 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.726 [2024-11-28 16:26:16.287458] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:24.726 [2024-11-28 16:26:16.287502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.726 [2024-11-28 16:26:16.287518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:24.726 [2024-11-28 16:26:16.287528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.727 [2024-11-28 16:26:16.289576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.727 [2024-11-28 16:26:16.289610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:24.727 pt3 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.727 [2024-11-28 16:26:16.299484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.727 [2024-11-28 16:26:16.301338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:24.727 [2024-11-28 16:26:16.301403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:24.727 [2024-11-28 16:26:16.301541] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:24.727 [2024-11-28 16:26:16.301555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:24.727 [2024-11-28 16:26:16.301805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:24.727 [2024-11-28 16:26:16.302246] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:24.727 [2024-11-28 16:26:16.302270] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:24.727 [2024-11-28 16:26:16.302374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.727 "name": "raid_bdev1", 00:13:24.727 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:24.727 "strip_size_kb": 64, 00:13:24.727 "state": "online", 00:13:24.727 "raid_level": "raid5f", 00:13:24.727 "superblock": true, 00:13:24.727 "num_base_bdevs": 3, 00:13:24.727 "num_base_bdevs_discovered": 3, 00:13:24.727 "num_base_bdevs_operational": 3, 00:13:24.727 "base_bdevs_list": [ 00:13:24.727 { 00:13:24.727 "name": "pt1", 00:13:24.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.727 "is_configured": true, 00:13:24.727 "data_offset": 2048, 00:13:24.727 "data_size": 63488 00:13:24.727 }, 00:13:24.727 { 00:13:24.727 "name": "pt2", 00:13:24.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.727 "is_configured": true, 00:13:24.727 "data_offset": 2048, 00:13:24.727 "data_size": 63488 00:13:24.727 }, 00:13:24.727 { 00:13:24.727 "name": "pt3", 00:13:24.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:24.727 "is_configured": true, 00:13:24.727 "data_offset": 2048, 00:13:24.727 "data_size": 63488 00:13:24.727 } 00:13:24.727 ] 00:13:24.727 }' 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.727 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:25.298 16:26:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.298 [2024-11-28 16:26:16.787060] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.298 "name": "raid_bdev1", 00:13:25.298 "aliases": [ 00:13:25.298 "d2b221da-6508-4a36-8fb7-1096db4d3023" 00:13:25.298 ], 00:13:25.298 "product_name": "Raid Volume", 00:13:25.298 "block_size": 512, 00:13:25.298 "num_blocks": 126976, 00:13:25.298 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:25.298 "assigned_rate_limits": { 00:13:25.298 "rw_ios_per_sec": 0, 00:13:25.298 "rw_mbytes_per_sec": 0, 00:13:25.298 "r_mbytes_per_sec": 0, 00:13:25.298 "w_mbytes_per_sec": 0 00:13:25.298 }, 00:13:25.298 "claimed": false, 00:13:25.298 "zoned": false, 00:13:25.298 "supported_io_types": { 00:13:25.298 "read": true, 00:13:25.298 "write": true, 00:13:25.298 "unmap": false, 00:13:25.298 "flush": false, 00:13:25.298 "reset": true, 00:13:25.298 "nvme_admin": false, 00:13:25.298 "nvme_io": false, 00:13:25.298 "nvme_io_md": false, 
00:13:25.298 "write_zeroes": true, 00:13:25.298 "zcopy": false, 00:13:25.298 "get_zone_info": false, 00:13:25.298 "zone_management": false, 00:13:25.298 "zone_append": false, 00:13:25.298 "compare": false, 00:13:25.298 "compare_and_write": false, 00:13:25.298 "abort": false, 00:13:25.298 "seek_hole": false, 00:13:25.298 "seek_data": false, 00:13:25.298 "copy": false, 00:13:25.298 "nvme_iov_md": false 00:13:25.298 }, 00:13:25.298 "driver_specific": { 00:13:25.298 "raid": { 00:13:25.298 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:25.298 "strip_size_kb": 64, 00:13:25.298 "state": "online", 00:13:25.298 "raid_level": "raid5f", 00:13:25.298 "superblock": true, 00:13:25.298 "num_base_bdevs": 3, 00:13:25.298 "num_base_bdevs_discovered": 3, 00:13:25.298 "num_base_bdevs_operational": 3, 00:13:25.298 "base_bdevs_list": [ 00:13:25.298 { 00:13:25.298 "name": "pt1", 00:13:25.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.298 "is_configured": true, 00:13:25.298 "data_offset": 2048, 00:13:25.298 "data_size": 63488 00:13:25.298 }, 00:13:25.298 { 00:13:25.298 "name": "pt2", 00:13:25.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.298 "is_configured": true, 00:13:25.298 "data_offset": 2048, 00:13:25.298 "data_size": 63488 00:13:25.298 }, 00:13:25.298 { 00:13:25.298 "name": "pt3", 00:13:25.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.298 "is_configured": true, 00:13:25.298 "data_offset": 2048, 00:13:25.298 "data_size": 63488 00:13:25.298 } 00:13:25.298 ] 00:13:25.298 } 00:13:25.298 } 00:13:25.298 }' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:25.298 pt2 00:13:25.298 pt3' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.298 16:26:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.299 
16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.299 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 [2024-11-28 16:26:17.090489] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d2b221da-6508-4a36-8fb7-1096db4d3023 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d2b221da-6508-4a36-8fb7-1096db4d3023 ']' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.559 16:26:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 [2024-11-28 16:26:17.118290] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.559 [2024-11-28 16:26:17.118314] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.559 [2024-11-28 16:26:17.118388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.559 [2024-11-28 16:26:17.118450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.559 [2024-11-28 16:26:17.118469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.559 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.559 [2024-11-28 16:26:17.266048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:25.559 [2024-11-28 16:26:17.267897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:25.559 [2024-11-28 16:26:17.267937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:25.559 [2024-11-28 16:26:17.267979] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:25.559 [2024-11-28 16:26:17.268013] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:25.560 [2024-11-28 16:26:17.268031] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:25.560 [2024-11-28 16:26:17.268043] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.560 [2024-11-28 16:26:17.268054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:25.560 request: 00:13:25.560 { 00:13:25.560 "name": "raid_bdev1", 00:13:25.560 "raid_level": "raid5f", 00:13:25.560 "base_bdevs": [ 00:13:25.560 "malloc1", 00:13:25.560 "malloc2", 00:13:25.560 "malloc3" 00:13:25.560 ], 00:13:25.560 "strip_size_kb": 64, 00:13:25.560 "superblock": false, 00:13:25.560 "method": "bdev_raid_create", 00:13:25.560 "req_id": 1 00:13:25.560 } 00:13:25.560 Got JSON-RPC error response 00:13:25.560 response: 00:13:25.560 { 00:13:25.560 "code": -17, 00:13:25.560 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:25.560 } 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.560 
16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.560 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.819 [2024-11-28 16:26:17.329933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:25.819 [2024-11-28 16:26:17.329974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.819 [2024-11-28 16:26:17.329989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:25.819 [2024-11-28 16:26:17.329998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.819 [2024-11-28 16:26:17.332005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.819 [2024-11-28 16:26:17.332038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:25.819 [2024-11-28 16:26:17.332093] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:25.819 [2024-11-28 16:26:17.332133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:25.819 pt1 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.819 "name": "raid_bdev1", 00:13:25.819 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:25.819 "strip_size_kb": 64, 00:13:25.819 "state": "configuring", 00:13:25.819 "raid_level": "raid5f", 00:13:25.819 "superblock": true, 00:13:25.819 "num_base_bdevs": 3, 00:13:25.819 "num_base_bdevs_discovered": 1, 00:13:25.819 
"num_base_bdevs_operational": 3, 00:13:25.819 "base_bdevs_list": [ 00:13:25.819 { 00:13:25.819 "name": "pt1", 00:13:25.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.819 "is_configured": true, 00:13:25.819 "data_offset": 2048, 00:13:25.819 "data_size": 63488 00:13:25.819 }, 00:13:25.819 { 00:13:25.819 "name": null, 00:13:25.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.819 "is_configured": false, 00:13:25.819 "data_offset": 2048, 00:13:25.819 "data_size": 63488 00:13:25.819 }, 00:13:25.819 { 00:13:25.819 "name": null, 00:13:25.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.819 "is_configured": false, 00:13:25.819 "data_offset": 2048, 00:13:25.819 "data_size": 63488 00:13:25.819 } 00:13:25.819 ] 00:13:25.819 }' 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.819 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.079 [2024-11-28 16:26:17.749227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:26.079 [2024-11-28 16:26:17.749285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.079 [2024-11-28 16:26:17.749302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:26.079 [2024-11-28 16:26:17.749314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.079 [2024-11-28 16:26:17.749630] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.079 [2024-11-28 16:26:17.749647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:26.079 [2024-11-28 16:26:17.749700] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:26.079 [2024-11-28 16:26:17.749722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:26.079 pt2 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.079 [2024-11-28 16:26:17.761214] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.079 "name": "raid_bdev1", 00:13:26.079 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:26.079 "strip_size_kb": 64, 00:13:26.079 "state": "configuring", 00:13:26.079 "raid_level": "raid5f", 00:13:26.079 "superblock": true, 00:13:26.079 "num_base_bdevs": 3, 00:13:26.079 "num_base_bdevs_discovered": 1, 00:13:26.079 "num_base_bdevs_operational": 3, 00:13:26.079 "base_bdevs_list": [ 00:13:26.079 { 00:13:26.079 "name": "pt1", 00:13:26.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.079 "is_configured": true, 00:13:26.079 "data_offset": 2048, 00:13:26.079 "data_size": 63488 00:13:26.079 }, 00:13:26.079 { 00:13:26.079 "name": null, 00:13:26.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.079 "is_configured": false, 00:13:26.079 "data_offset": 0, 00:13:26.079 "data_size": 63488 00:13:26.079 }, 00:13:26.079 { 00:13:26.079 "name": null, 00:13:26.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.079 "is_configured": false, 00:13:26.079 "data_offset": 2048, 00:13:26.079 "data_size": 63488 00:13:26.079 } 00:13:26.079 ] 00:13:26.079 }' 00:13:26.079 16:26:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.079 16:26:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 [2024-11-28 16:26:18.268321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:26.647 [2024-11-28 16:26:18.268367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.647 [2024-11-28 16:26:18.268384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:26.647 [2024-11-28 16:26:18.268392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.647 [2024-11-28 16:26:18.268706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.647 [2024-11-28 16:26:18.268720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:26.647 [2024-11-28 16:26:18.268775] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:26.647 [2024-11-28 16:26:18.268793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:26.647 pt2 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:26.647 16:26:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 [2024-11-28 16:26:18.280294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:26.647 [2024-11-28 16:26:18.280331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.647 [2024-11-28 16:26:18.280347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:26.647 [2024-11-28 16:26:18.280354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.647 [2024-11-28 16:26:18.280672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.647 [2024-11-28 16:26:18.280691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:26.647 [2024-11-28 16:26:18.280740] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:26.647 [2024-11-28 16:26:18.280756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:26.647 [2024-11-28 16:26:18.280870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:26.647 [2024-11-28 16:26:18.280883] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:26.647 [2024-11-28 16:26:18.281087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:26.647 [2024-11-28 16:26:18.281480] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:26.647 [2024-11-28 16:26:18.281494] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:26.647 [2024-11-28 16:26:18.281583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.647 pt3 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.647 "name": "raid_bdev1", 00:13:26.647 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:26.647 "strip_size_kb": 64, 00:13:26.647 "state": "online", 00:13:26.647 "raid_level": "raid5f", 00:13:26.647 "superblock": true, 00:13:26.647 "num_base_bdevs": 3, 00:13:26.647 "num_base_bdevs_discovered": 3, 00:13:26.647 "num_base_bdevs_operational": 3, 00:13:26.647 "base_bdevs_list": [ 00:13:26.647 { 00:13:26.647 "name": "pt1", 00:13:26.647 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.647 "is_configured": true, 00:13:26.647 "data_offset": 2048, 00:13:26.647 "data_size": 63488 00:13:26.647 }, 00:13:26.647 { 00:13:26.647 "name": "pt2", 00:13:26.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.647 "is_configured": true, 00:13:26.647 "data_offset": 2048, 00:13:26.647 "data_size": 63488 00:13:26.647 }, 00:13:26.647 { 00:13:26.647 "name": "pt3", 00:13:26.647 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.647 "is_configured": true, 00:13:26.647 "data_offset": 2048, 00:13:26.647 "data_size": 63488 00:13:26.647 } 00:13:26.647 ] 00:13:26.647 }' 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.647 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:27.215 
16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 [2024-11-28 16:26:18.731817] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:27.215 "name": "raid_bdev1", 00:13:27.215 "aliases": [ 00:13:27.215 "d2b221da-6508-4a36-8fb7-1096db4d3023" 00:13:27.215 ], 00:13:27.215 "product_name": "Raid Volume", 00:13:27.215 "block_size": 512, 00:13:27.215 "num_blocks": 126976, 00:13:27.215 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:27.215 "assigned_rate_limits": { 00:13:27.215 "rw_ios_per_sec": 0, 00:13:27.215 "rw_mbytes_per_sec": 0, 00:13:27.215 "r_mbytes_per_sec": 0, 00:13:27.215 "w_mbytes_per_sec": 0 00:13:27.215 }, 00:13:27.215 "claimed": false, 00:13:27.215 "zoned": false, 00:13:27.215 "supported_io_types": { 00:13:27.215 "read": true, 00:13:27.215 "write": true, 00:13:27.215 "unmap": false, 00:13:27.215 "flush": false, 00:13:27.215 "reset": true, 00:13:27.215 "nvme_admin": false, 00:13:27.215 "nvme_io": false, 00:13:27.215 "nvme_io_md": false, 00:13:27.215 "write_zeroes": true, 00:13:27.215 "zcopy": false, 00:13:27.215 "get_zone_info": false, 
00:13:27.215 "zone_management": false, 00:13:27.215 "zone_append": false, 00:13:27.215 "compare": false, 00:13:27.215 "compare_and_write": false, 00:13:27.215 "abort": false, 00:13:27.215 "seek_hole": false, 00:13:27.215 "seek_data": false, 00:13:27.215 "copy": false, 00:13:27.215 "nvme_iov_md": false 00:13:27.215 }, 00:13:27.215 "driver_specific": { 00:13:27.215 "raid": { 00:13:27.215 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:27.215 "strip_size_kb": 64, 00:13:27.215 "state": "online", 00:13:27.215 "raid_level": "raid5f", 00:13:27.215 "superblock": true, 00:13:27.215 "num_base_bdevs": 3, 00:13:27.215 "num_base_bdevs_discovered": 3, 00:13:27.215 "num_base_bdevs_operational": 3, 00:13:27.215 "base_bdevs_list": [ 00:13:27.215 { 00:13:27.215 "name": "pt1", 00:13:27.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.215 "is_configured": true, 00:13:27.215 "data_offset": 2048, 00:13:27.215 "data_size": 63488 00:13:27.215 }, 00:13:27.215 { 00:13:27.215 "name": "pt2", 00:13:27.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.215 "is_configured": true, 00:13:27.215 "data_offset": 2048, 00:13:27.215 "data_size": 63488 00:13:27.215 }, 00:13:27.215 { 00:13:27.215 "name": "pt3", 00:13:27.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.215 "is_configured": true, 00:13:27.215 "data_offset": 2048, 00:13:27.215 "data_size": 63488 00:13:27.215 } 00:13:27.215 ] 00:13:27.215 } 00:13:27.215 } 00:13:27.215 }' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:27.215 pt2 00:13:27.215 pt3' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.215 16:26:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.475 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.475 16:26:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.475 [2024-11-28 16:26:19.011325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d2b221da-6508-4a36-8fb7-1096db4d3023 '!=' d2b221da-6508-4a36-8fb7-1096db4d3023 ']' 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:27.475 16:26:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.475 [2024-11-28 16:26:19.059125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.475 "name": "raid_bdev1", 00:13:27.475 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:27.475 "strip_size_kb": 64, 00:13:27.475 "state": "online", 00:13:27.475 "raid_level": "raid5f", 00:13:27.475 "superblock": true, 00:13:27.475 "num_base_bdevs": 3, 00:13:27.475 "num_base_bdevs_discovered": 2, 00:13:27.475 "num_base_bdevs_operational": 2, 00:13:27.475 "base_bdevs_list": [ 00:13:27.475 { 00:13:27.475 "name": null, 00:13:27.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.475 "is_configured": false, 00:13:27.475 "data_offset": 0, 00:13:27.475 "data_size": 63488 00:13:27.475 }, 00:13:27.475 { 00:13:27.475 "name": "pt2", 00:13:27.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.475 "is_configured": true, 00:13:27.475 "data_offset": 2048, 00:13:27.475 "data_size": 63488 00:13:27.475 }, 00:13:27.475 { 00:13:27.475 "name": "pt3", 00:13:27.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.475 "is_configured": true, 00:13:27.475 "data_offset": 2048, 00:13:27.475 "data_size": 63488 00:13:27.475 } 00:13:27.475 ] 00:13:27.475 }' 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.475 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 [2024-11-28 16:26:19.510318] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:13:28.043 [2024-11-28 16:26:19.510347] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.043 [2024-11-28 16:26:19.510393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.043 [2024-11-28 16:26:19.510438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.043 [2024-11-28 16:26:19.510446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 16:26:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 [2024-11-28 16:26:19.598188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:28.043 [2024-11-28 16:26:19.598232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.043 [2024-11-28 16:26:19.598247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:28.043 [2024-11-28 16:26:19.598255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:28.043 [2024-11-28 16:26:19.600249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.043 [2024-11-28 16:26:19.600284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:28.043 [2024-11-28 16:26:19.600342] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:28.043 [2024-11-28 16:26:19.600373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.043 pt2 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.043 "name": "raid_bdev1", 00:13:28.043 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:28.043 "strip_size_kb": 64, 00:13:28.043 "state": "configuring", 00:13:28.043 "raid_level": "raid5f", 00:13:28.043 "superblock": true, 00:13:28.043 "num_base_bdevs": 3, 00:13:28.043 "num_base_bdevs_discovered": 1, 00:13:28.043 "num_base_bdevs_operational": 2, 00:13:28.043 "base_bdevs_list": [ 00:13:28.043 { 00:13:28.043 "name": null, 00:13:28.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.043 "is_configured": false, 00:13:28.043 "data_offset": 2048, 00:13:28.043 "data_size": 63488 00:13:28.043 }, 00:13:28.043 { 00:13:28.043 "name": "pt2", 00:13:28.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.043 "is_configured": true, 00:13:28.043 "data_offset": 2048, 00:13:28.043 "data_size": 63488 00:13:28.043 }, 00:13:28.043 { 00:13:28.043 "name": null, 00:13:28.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.043 "is_configured": false, 00:13:28.043 "data_offset": 2048, 00:13:28.043 "data_size": 63488 00:13:28.043 } 00:13:28.043 ] 00:13:28.043 }' 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.043 16:26:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.303 [2024-11-28 16:26:20.049412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:28.303 [2024-11-28 16:26:20.049502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.303 [2024-11-28 16:26:20.049536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:28.303 [2024-11-28 16:26:20.049563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.303 [2024-11-28 16:26:20.049927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.303 [2024-11-28 16:26:20.049978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:28.303 [2024-11-28 16:26:20.050055] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:28.303 [2024-11-28 16:26:20.050106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:28.303 [2024-11-28 16:26:20.050218] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:28.303 [2024-11-28 16:26:20.050253] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:28.303 [2024-11-28 16:26:20.050482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:28.303 [2024-11-28 16:26:20.050970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:28.303 [2024-11-28 16:26:20.051024] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006d00 00:13:28.303 [2024-11-28 16:26:20.051289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.303 pt3 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.303 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.562 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.562 16:26:20 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.562 "name": "raid_bdev1", 00:13:28.562 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:28.562 "strip_size_kb": 64, 00:13:28.562 "state": "online", 00:13:28.562 "raid_level": "raid5f", 00:13:28.562 "superblock": true, 00:13:28.562 "num_base_bdevs": 3, 00:13:28.562 "num_base_bdevs_discovered": 2, 00:13:28.562 "num_base_bdevs_operational": 2, 00:13:28.562 "base_bdevs_list": [ 00:13:28.562 { 00:13:28.562 "name": null, 00:13:28.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.562 "is_configured": false, 00:13:28.562 "data_offset": 2048, 00:13:28.562 "data_size": 63488 00:13:28.562 }, 00:13:28.562 { 00:13:28.562 "name": "pt2", 00:13:28.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.562 "is_configured": true, 00:13:28.562 "data_offset": 2048, 00:13:28.562 "data_size": 63488 00:13:28.562 }, 00:13:28.562 { 00:13:28.562 "name": "pt3", 00:13:28.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.562 "is_configured": true, 00:13:28.562 "data_offset": 2048, 00:13:28.562 "data_size": 63488 00:13:28.562 } 00:13:28.562 ] 00:13:28.562 }' 00:13:28.562 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.562 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.822 [2024-11-28 16:26:20.444737] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.822 [2024-11-28 16:26:20.444812] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.822 [2024-11-28 16:26:20.444901] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:28.822 [2024-11-28 16:26:20.444951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.822 [2024-11-28 16:26:20.444964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:28.822 16:26:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.822 [2024-11-28 16:26:20.516614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:28.822 [2024-11-28 16:26:20.516708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.822 [2024-11-28 16:26:20.516739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:28.822 [2024-11-28 16:26:20.516775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.822 [2024-11-28 16:26:20.518882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.822 [2024-11-28 16:26:20.518948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:28.822 [2024-11-28 16:26:20.519060] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:28.822 [2024-11-28 16:26:20.519143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:28.822 [2024-11-28 16:26:20.519282] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:28.822 [2024-11-28 16:26:20.519343] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.822 [2024-11-28 16:26:20.519379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:28.822 [2024-11-28 16:26:20.519449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.822 pt1 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:28.822 16:26:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.822 "name": "raid_bdev1", 00:13:28.822 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:28.822 "strip_size_kb": 64, 00:13:28.822 "state": "configuring", 00:13:28.822 "raid_level": "raid5f", 00:13:28.822 
"superblock": true, 00:13:28.822 "num_base_bdevs": 3, 00:13:28.822 "num_base_bdevs_discovered": 1, 00:13:28.822 "num_base_bdevs_operational": 2, 00:13:28.822 "base_bdevs_list": [ 00:13:28.822 { 00:13:28.822 "name": null, 00:13:28.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.822 "is_configured": false, 00:13:28.822 "data_offset": 2048, 00:13:28.822 "data_size": 63488 00:13:28.822 }, 00:13:28.822 { 00:13:28.822 "name": "pt2", 00:13:28.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.822 "is_configured": true, 00:13:28.822 "data_offset": 2048, 00:13:28.822 "data_size": 63488 00:13:28.822 }, 00:13:28.822 { 00:13:28.822 "name": null, 00:13:28.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.822 "is_configured": false, 00:13:28.822 "data_offset": 2048, 00:13:28.822 "data_size": 63488 00:13:28.822 } 00:13:28.822 ] 00:13:28.822 }' 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.822 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.392 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:29.392 16:26:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:29.392 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.392 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.392 16:26:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.392 [2024-11-28 16:26:21.027818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:29.392 [2024-11-28 16:26:21.027876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.392 [2024-11-28 16:26:21.027890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:29.392 [2024-11-28 16:26:21.027900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.392 [2024-11-28 16:26:21.028238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.392 [2024-11-28 16:26:21.028259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:29.392 [2024-11-28 16:26:21.028314] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:29.392 [2024-11-28 16:26:21.028333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:29.392 [2024-11-28 16:26:21.028408] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:29.392 [2024-11-28 16:26:21.028420] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:29.392 [2024-11-28 16:26:21.028628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:29.392 [2024-11-28 16:26:21.029102] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:29.392 [2024-11-28 16:26:21.029121] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:29.392 [2024-11-28 16:26:21.029268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.392 pt3 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.392 "name": "raid_bdev1", 00:13:29.392 "uuid": "d2b221da-6508-4a36-8fb7-1096db4d3023", 00:13:29.392 "strip_size_kb": 64, 00:13:29.392 "state": "online", 00:13:29.392 "raid_level": 
"raid5f", 00:13:29.392 "superblock": true, 00:13:29.392 "num_base_bdevs": 3, 00:13:29.392 "num_base_bdevs_discovered": 2, 00:13:29.392 "num_base_bdevs_operational": 2, 00:13:29.392 "base_bdevs_list": [ 00:13:29.392 { 00:13:29.392 "name": null, 00:13:29.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.392 "is_configured": false, 00:13:29.392 "data_offset": 2048, 00:13:29.392 "data_size": 63488 00:13:29.392 }, 00:13:29.392 { 00:13:29.392 "name": "pt2", 00:13:29.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.392 "is_configured": true, 00:13:29.392 "data_offset": 2048, 00:13:29.392 "data_size": 63488 00:13:29.392 }, 00:13:29.392 { 00:13:29.392 "name": "pt3", 00:13:29.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.392 "is_configured": true, 00:13:29.392 "data_offset": 2048, 00:13:29.392 "data_size": 63488 00:13:29.392 } 00:13:29.392 ] 00:13:29.392 }' 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.392 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.963 [2024-11-28 16:26:21.563086] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d2b221da-6508-4a36-8fb7-1096db4d3023 '!=' d2b221da-6508-4a36-8fb7-1096db4d3023 ']' 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91662 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91662 ']' 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91662 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91662 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91662' 00:13:29.963 killing process with pid 91662 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91662 00:13:29.963 [2024-11-28 16:26:21.631422] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.963 [2024-11-28 16:26:21.631541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:29.963 [2024-11-28 16:26:21.631604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.963 [2024-11-28 16:26:21.631614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:29.963 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91662 00:13:29.963 [2024-11-28 16:26:21.664739] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.223 16:26:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:30.223 00:13:30.223 real 0m6.615s 00:13:30.223 user 0m11.134s 00:13:30.223 sys 0m1.397s 00:13:30.223 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.223 16:26:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.223 ************************************ 00:13:30.223 END TEST raid5f_superblock_test 00:13:30.223 ************************************ 00:13:30.223 16:26:21 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:30.223 16:26:21 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:30.223 16:26:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:30.223 16:26:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.223 16:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:30.223 ************************************ 00:13:30.223 START TEST raid5f_rebuild_test 00:13:30.223 ************************************ 00:13:30.223 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:30.223 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:30.223 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:30.224 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:30.484 16:26:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92095 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92095 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92095 ']' 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.484 16:26:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.484 [2024-11-28 16:26:22.079585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:30.484 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:30.484 Zero copy mechanism will not be used. 00:13:30.484 [2024-11-28 16:26:22.079828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92095 ] 00:13:30.484 [2024-11-28 16:26:22.239303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.744 [2024-11-28 16:26:22.285732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.744 [2024-11-28 16:26:22.328709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.744 [2024-11-28 16:26:22.328797] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.314 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 BaseBdev1_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-11-28 16:26:22.911093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:31.315 [2024-11-28 16:26:22.911160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.315 [2024-11-28 16:26:22.911190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:31.315 [2024-11-28 16:26:22.911202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.315 [2024-11-28 16:26:22.913250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.315 [2024-11-28 16:26:22.913344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.315 BaseBdev1 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 BaseBdev2_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-11-28 16:26:22.951221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:31.315 [2024-11-28 16:26:22.951388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.315 [2024-11-28 16:26:22.951434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:31.315 [2024-11-28 16:26:22.951452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.315 [2024-11-28 16:26:22.955345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.315 [2024-11-28 16:26:22.955406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.315 BaseBdev2 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 BaseBdev3_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-11-28 16:26:22.981062] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:31.315 [2024-11-28 16:26:22.981118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.315 [2024-11-28 16:26:22.981140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:31.315 [2024-11-28 16:26:22.981149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.315 [2024-11-28 16:26:22.983103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.315 [2024-11-28 16:26:22.983138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:31.315 BaseBdev3 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 spare_malloc 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 spare_delay 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-11-28 16:26:23.021352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.315 [2024-11-28 16:26:23.021397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.315 [2024-11-28 16:26:23.021417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:31.315 [2024-11-28 16:26:23.021425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.315 [2024-11-28 16:26:23.023434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.315 [2024-11-28 16:26:23.023479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.315 spare 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 [2024-11-28 16:26:23.033389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.315 [2024-11-28 16:26:23.035105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.315 [2024-11-28 16:26:23.035169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.315 [2024-11-28 16:26:23.035242] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:31.315 [2024-11-28 16:26:23.035252] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:31.315 [2024-11-28 
16:26:23.035483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:31.315 [2024-11-28 16:26:23.035886] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:31.315 [2024-11-28 16:26:23.035897] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:31.315 [2024-11-28 16:26:23.036029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.575 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.575 "name": "raid_bdev1", 00:13:31.575 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:31.575 "strip_size_kb": 64, 00:13:31.575 "state": "online", 00:13:31.575 "raid_level": "raid5f", 00:13:31.575 "superblock": false, 00:13:31.575 "num_base_bdevs": 3, 00:13:31.575 "num_base_bdevs_discovered": 3, 00:13:31.575 "num_base_bdevs_operational": 3, 00:13:31.575 "base_bdevs_list": [ 00:13:31.575 { 00:13:31.575 "name": "BaseBdev1", 00:13:31.575 "uuid": "d04a78b8-bbb4-50ee-a32b-5fba1755ffc9", 00:13:31.575 "is_configured": true, 00:13:31.575 "data_offset": 0, 00:13:31.575 "data_size": 65536 00:13:31.575 }, 00:13:31.575 { 00:13:31.576 "name": "BaseBdev2", 00:13:31.576 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:31.576 "is_configured": true, 00:13:31.576 "data_offset": 0, 00:13:31.576 "data_size": 65536 00:13:31.576 }, 00:13:31.576 { 00:13:31.576 "name": "BaseBdev3", 00:13:31.576 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:31.576 "is_configured": true, 00:13:31.576 "data_offset": 0, 00:13:31.576 "data_size": 65536 00:13:31.576 } 00:13:31.576 ] 00:13:31.576 }' 00:13:31.576 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.576 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.836 16:26:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.836 [2024-11-28 16:26:23.480757] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.836 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:32.097 [2024-11-28 16:26:23.752133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:32.097 /dev/nbd0 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.097 1+0 records in 00:13:32.097 1+0 records out 00:13:32.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492282 s, 8.3 MB/s 00:13:32.097 
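The trace above shows the `waitfornbd` helper polling `/proc/partitions` (up to 20 tries) before probing the new `/dev/nbd0` with a single direct-I/O `dd`. A minimal re-creation of that polling step is sketched below; the function name and the extra `partitions` parameter are assumptions added here for testability, not part of `autotest_common.sh`:

```shell
# Sketch of the nbd-readiness poll seen in the trace (assumption: the
# 20-iteration budget mirrors waitfornbd; the partitions-file argument
# is a hypothetical addition so the logic can be exercised offline).
wait_for_blockdev() {
    local name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the device name as a whole word, so nbd0 does not
        # also match nbd0p1 or similar partition entries.
        grep -q -w "$name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
```

In the real helper a follow-up `dd if=/dev/nbd0 ... iflag=direct count=1` read (visible in the trace as `1+0 records in/out`) confirms the device actually serves I/O once it appears.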
16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:32.097 16:26:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:32.357 512+0 records in 00:13:32.357 512+0 records out 00:13:32.357 67108864 bytes (67 MB, 64 MiB) copied, 0.274706 s, 244 MB/s 00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:32.357 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:32.617 [2024-11-28 16:26:24.351920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.617 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.617 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.617 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.617 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.617 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.618 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.618 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:32.618 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.618 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:32.618 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.618 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 [2024-11-28 16:26:24.383954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.878 16:26:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.878 "name": "raid_bdev1", 00:13:32.878 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:32.878 "strip_size_kb": 64, 00:13:32.878 "state": "online", 00:13:32.878 "raid_level": "raid5f", 00:13:32.878 "superblock": false, 00:13:32.878 "num_base_bdevs": 3, 00:13:32.878 "num_base_bdevs_discovered": 2, 00:13:32.878 "num_base_bdevs_operational": 2, 00:13:32.878 "base_bdevs_list": [ 00:13:32.878 { 00:13:32.878 "name": null, 00:13:32.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.878 "is_configured": false, 00:13:32.878 "data_offset": 0, 00:13:32.878 "data_size": 65536 00:13:32.878 }, 00:13:32.878 { 00:13:32.878 
"name": "BaseBdev2", 00:13:32.878 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:32.878 "is_configured": true, 00:13:32.878 "data_offset": 0, 00:13:32.878 "data_size": 65536 00:13:32.878 }, 00:13:32.878 { 00:13:32.878 "name": "BaseBdev3", 00:13:32.878 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:32.878 "is_configured": true, 00:13:32.878 "data_offset": 0, 00:13:32.878 "data_size": 65536 00:13:32.878 } 00:13:32.878 ] 00:13:32.878 }' 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.878 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.138 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.138 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.138 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.138 [2024-11-28 16:26:24.835272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.138 [2024-11-28 16:26:24.839057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:33.138 [2024-11-28 16:26:24.841142] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.138 16:26:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.138 16:26:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.525 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.525 "name": "raid_bdev1", 00:13:34.525 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:34.525 "strip_size_kb": 64, 00:13:34.525 "state": "online", 00:13:34.525 "raid_level": "raid5f", 00:13:34.525 "superblock": false, 00:13:34.525 "num_base_bdevs": 3, 00:13:34.525 "num_base_bdevs_discovered": 3, 00:13:34.525 "num_base_bdevs_operational": 3, 00:13:34.526 "process": { 00:13:34.526 "type": "rebuild", 00:13:34.526 "target": "spare", 00:13:34.526 "progress": { 00:13:34.526 "blocks": 20480, 00:13:34.526 "percent": 15 00:13:34.526 } 00:13:34.526 }, 00:13:34.526 "base_bdevs_list": [ 00:13:34.526 { 00:13:34.526 "name": "spare", 00:13:34.526 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:34.526 "is_configured": true, 00:13:34.526 "data_offset": 0, 00:13:34.526 "data_size": 65536 00:13:34.526 }, 00:13:34.526 { 00:13:34.526 "name": "BaseBdev2", 00:13:34.526 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:34.526 "is_configured": true, 00:13:34.526 "data_offset": 0, 00:13:34.526 "data_size": 65536 00:13:34.526 }, 00:13:34.526 { 00:13:34.526 "name": "BaseBdev3", 00:13:34.526 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:34.526 "is_configured": true, 00:13:34.526 "data_offset": 0, 00:13:34.526 
"data_size": 65536 00:13:34.526 } 00:13:34.526 ] 00:13:34.526 }' 00:13:34.526 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.526 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.526 16:26:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 [2024-11-28 16:26:26.007897] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.526 [2024-11-28 16:26:26.047755] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.526 [2024-11-28 16:26:26.047918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.526 [2024-11-28 16:26:26.047938] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.526 [2024-11-28 16:26:26.047949] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.526 "name": "raid_bdev1", 00:13:34.526 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:34.526 "strip_size_kb": 64, 00:13:34.526 "state": "online", 00:13:34.526 "raid_level": "raid5f", 00:13:34.526 "superblock": false, 00:13:34.526 "num_base_bdevs": 3, 00:13:34.526 "num_base_bdevs_discovered": 2, 00:13:34.526 "num_base_bdevs_operational": 2, 00:13:34.526 "base_bdevs_list": [ 00:13:34.526 { 00:13:34.526 "name": null, 00:13:34.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.526 "is_configured": false, 00:13:34.526 "data_offset": 0, 00:13:34.526 "data_size": 65536 00:13:34.526 }, 00:13:34.526 { 00:13:34.526 "name": "BaseBdev2", 00:13:34.526 
"uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:34.526 "is_configured": true, 00:13:34.526 "data_offset": 0, 00:13:34.526 "data_size": 65536 00:13:34.526 }, 00:13:34.526 { 00:13:34.526 "name": "BaseBdev3", 00:13:34.526 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:34.526 "is_configured": true, 00:13:34.526 "data_offset": 0, 00:13:34.526 "data_size": 65536 00:13:34.526 } 00:13:34.526 ] 00:13:34.526 }' 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.526 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.795 "name": "raid_bdev1", 00:13:34.795 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:34.795 "strip_size_kb": 64, 00:13:34.795 "state": "online", 00:13:34.795 "raid_level": 
"raid5f", 00:13:34.795 "superblock": false, 00:13:34.795 "num_base_bdevs": 3, 00:13:34.795 "num_base_bdevs_discovered": 2, 00:13:34.795 "num_base_bdevs_operational": 2, 00:13:34.795 "base_bdevs_list": [ 00:13:34.795 { 00:13:34.795 "name": null, 00:13:34.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.795 "is_configured": false, 00:13:34.795 "data_offset": 0, 00:13:34.795 "data_size": 65536 00:13:34.795 }, 00:13:34.795 { 00:13:34.795 "name": "BaseBdev2", 00:13:34.795 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:34.795 "is_configured": true, 00:13:34.795 "data_offset": 0, 00:13:34.795 "data_size": 65536 00:13:34.795 }, 00:13:34.795 { 00:13:34.795 "name": "BaseBdev3", 00:13:34.795 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:34.795 "is_configured": true, 00:13:34.795 "data_offset": 0, 00:13:34.795 "data_size": 65536 00:13:34.795 } 00:13:34.795 ] 00:13:34.795 }' 00:13:34.795 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.061 [2024-11-28 16:26:26.636286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.061 [2024-11-28 16:26:26.639349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:35.061 [2024-11-28 16:26:26.641465] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.061 16:26:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.011 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.011 "name": "raid_bdev1", 00:13:36.011 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:36.011 "strip_size_kb": 64, 00:13:36.011 "state": "online", 00:13:36.011 "raid_level": "raid5f", 00:13:36.011 "superblock": false, 00:13:36.011 "num_base_bdevs": 3, 00:13:36.011 "num_base_bdevs_discovered": 3, 00:13:36.011 "num_base_bdevs_operational": 3, 00:13:36.011 "process": { 00:13:36.011 "type": "rebuild", 00:13:36.011 "target": "spare", 00:13:36.011 "progress": { 00:13:36.011 "blocks": 20480, 
00:13:36.011 "percent": 15 00:13:36.011 } 00:13:36.011 }, 00:13:36.011 "base_bdevs_list": [ 00:13:36.011 { 00:13:36.011 "name": "spare", 00:13:36.011 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:36.011 "is_configured": true, 00:13:36.011 "data_offset": 0, 00:13:36.011 "data_size": 65536 00:13:36.011 }, 00:13:36.011 { 00:13:36.011 "name": "BaseBdev2", 00:13:36.011 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:36.011 "is_configured": true, 00:13:36.011 "data_offset": 0, 00:13:36.011 "data_size": 65536 00:13:36.011 }, 00:13:36.011 { 00:13:36.011 "name": "BaseBdev3", 00:13:36.011 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:36.011 "is_configured": true, 00:13:36.011 "data_offset": 0, 00:13:36.011 "data_size": 65536 00:13:36.011 } 00:13:36.012 ] 00:13:36.012 }' 00:13:36.012 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.012 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.012 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=446 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.272 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.272 "name": "raid_bdev1", 00:13:36.272 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:36.272 "strip_size_kb": 64, 00:13:36.272 "state": "online", 00:13:36.272 "raid_level": "raid5f", 00:13:36.272 "superblock": false, 00:13:36.272 "num_base_bdevs": 3, 00:13:36.272 "num_base_bdevs_discovered": 3, 00:13:36.272 "num_base_bdevs_operational": 3, 00:13:36.272 "process": { 00:13:36.272 "type": "rebuild", 00:13:36.272 "target": "spare", 00:13:36.272 "progress": { 00:13:36.272 "blocks": 22528, 00:13:36.272 "percent": 17 00:13:36.272 } 00:13:36.272 }, 00:13:36.272 "base_bdevs_list": [ 00:13:36.272 { 00:13:36.272 "name": "spare", 00:13:36.272 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:36.272 "is_configured": true, 00:13:36.272 "data_offset": 0, 00:13:36.272 "data_size": 65536 00:13:36.272 }, 00:13:36.272 { 00:13:36.272 "name": "BaseBdev2", 00:13:36.272 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:36.272 "is_configured": true, 00:13:36.272 "data_offset": 0, 00:13:36.272 
"data_size": 65536 00:13:36.272 }, 00:13:36.272 { 00:13:36.272 "name": "BaseBdev3", 00:13:36.272 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:36.272 "is_configured": true, 00:13:36.273 "data_offset": 0, 00:13:36.273 "data_size": 65536 00:13:36.273 } 00:13:36.273 ] 00:13:36.273 }' 00:13:36.273 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.273 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.273 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.273 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.273 16:26:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.211 16:26:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.471 16:26:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.471 16:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.471 "name": "raid_bdev1", 00:13:37.471 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:37.471 "strip_size_kb": 64, 00:13:37.471 "state": "online", 00:13:37.471 "raid_level": "raid5f", 00:13:37.471 "superblock": false, 00:13:37.471 "num_base_bdevs": 3, 00:13:37.471 "num_base_bdevs_discovered": 3, 00:13:37.471 "num_base_bdevs_operational": 3, 00:13:37.471 "process": { 00:13:37.471 "type": "rebuild", 00:13:37.471 "target": "spare", 00:13:37.471 "progress": { 00:13:37.471 "blocks": 47104, 00:13:37.471 "percent": 35 00:13:37.471 } 00:13:37.471 }, 00:13:37.471 "base_bdevs_list": [ 00:13:37.471 { 00:13:37.471 "name": "spare", 00:13:37.471 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:37.471 "is_configured": true, 00:13:37.471 "data_offset": 0, 00:13:37.471 "data_size": 65536 00:13:37.471 }, 00:13:37.471 { 00:13:37.471 "name": "BaseBdev2", 00:13:37.471 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:37.471 "is_configured": true, 00:13:37.471 "data_offset": 0, 00:13:37.471 "data_size": 65536 00:13:37.471 }, 00:13:37.471 { 00:13:37.471 "name": "BaseBdev3", 00:13:37.471 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:37.471 "is_configured": true, 00:13:37.472 "data_offset": 0, 00:13:37.472 "data_size": 65536 00:13:37.472 } 00:13:37.472 ] 00:13:37.472 }' 00:13:37.472 16:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.472 16:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.472 16:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.472 16:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.472 16:26:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
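The `(( SECONDS < timeout ))` / `sleep 1` pattern above repeatedly re-reads `bdev_raid_get_bdevs` and extracts `.process.type` and `.process.target` with the `// "none"` fallback until the rebuild completes. A condensed sketch of that wait loop follows; the function name, the injectable RPC command, and the use of a relative deadline are assumptions made here for clarity, while the jq filters are the ones visible in the trace:

```shell
# Sketch of the rebuild-wait loop (assumptions: rpc command is passed in
# so it can be stubbed; a relative deadline replaces the script's
# absolute SECONDS comparison; jq filters copied from the trace).
wait_for_rebuild() {
    local rpc=$1 timeout=${2:-60} info
    local deadline=$((SECONDS + timeout))
    while (( SECONDS < deadline )); do
        info=$($rpc bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # While rebuilding, .process.type is "rebuild"; once the rebuild
        # finishes the process object disappears and // "none" kicks in.
        [[ $(jq -r '.process.type // "none"' <<<"$info") == none ]] && return 0
        sleep 1
    done
    return 1
}
```

With the real `scripts/rpc.py -s /var/tmp/spdk.sock` as `$rpc`, this is the same convergence check the test performs between the successive JSON dumps (progress 15% → 17% → 35% → 53%) in the log.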
00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.411 "name": "raid_bdev1", 00:13:38.411 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:38.411 "strip_size_kb": 64, 00:13:38.411 "state": "online", 00:13:38.411 "raid_level": "raid5f", 00:13:38.411 "superblock": false, 00:13:38.411 "num_base_bdevs": 3, 00:13:38.411 "num_base_bdevs_discovered": 3, 00:13:38.411 "num_base_bdevs_operational": 3, 00:13:38.411 "process": { 00:13:38.411 "type": "rebuild", 00:13:38.411 "target": "spare", 00:13:38.411 "progress": { 00:13:38.411 "blocks": 69632, 00:13:38.411 "percent": 53 00:13:38.411 } 00:13:38.411 }, 00:13:38.411 "base_bdevs_list": [ 00:13:38.411 { 00:13:38.411 "name": "spare", 00:13:38.411 "uuid": 
"2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:38.411 "is_configured": true, 00:13:38.411 "data_offset": 0, 00:13:38.411 "data_size": 65536 00:13:38.411 }, 00:13:38.411 { 00:13:38.411 "name": "BaseBdev2", 00:13:38.411 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:38.411 "is_configured": true, 00:13:38.411 "data_offset": 0, 00:13:38.411 "data_size": 65536 00:13:38.411 }, 00:13:38.411 { 00:13:38.411 "name": "BaseBdev3", 00:13:38.411 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:38.411 "is_configured": true, 00:13:38.411 "data_offset": 0, 00:13:38.411 "data_size": 65536 00:13:38.411 } 00:13:38.411 ] 00:13:38.411 }' 00:13:38.411 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.671 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.671 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.672 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.672 16:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.612 16:26:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.612 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.612 "name": "raid_bdev1", 00:13:39.612 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:39.613 "strip_size_kb": 64, 00:13:39.613 "state": "online", 00:13:39.613 "raid_level": "raid5f", 00:13:39.613 "superblock": false, 00:13:39.613 "num_base_bdevs": 3, 00:13:39.613 "num_base_bdevs_discovered": 3, 00:13:39.613 "num_base_bdevs_operational": 3, 00:13:39.613 "process": { 00:13:39.613 "type": "rebuild", 00:13:39.613 "target": "spare", 00:13:39.613 "progress": { 00:13:39.613 "blocks": 92160, 00:13:39.613 "percent": 70 00:13:39.613 } 00:13:39.613 }, 00:13:39.613 "base_bdevs_list": [ 00:13:39.613 { 00:13:39.613 "name": "spare", 00:13:39.613 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:39.613 "is_configured": true, 00:13:39.613 "data_offset": 0, 00:13:39.613 "data_size": 65536 00:13:39.613 }, 00:13:39.613 { 00:13:39.613 "name": "BaseBdev2", 00:13:39.613 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:39.613 "is_configured": true, 00:13:39.613 "data_offset": 0, 00:13:39.613 "data_size": 65536 00:13:39.613 }, 00:13:39.613 { 00:13:39.613 "name": "BaseBdev3", 00:13:39.613 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:39.613 "is_configured": true, 00:13:39.613 "data_offset": 0, 00:13:39.613 "data_size": 65536 00:13:39.613 } 00:13:39.613 ] 00:13:39.613 }' 00:13:39.613 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.613 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.613 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.613 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.613 16:26:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.995 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.995 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.995 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.995 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.995 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.996 "name": "raid_bdev1", 00:13:40.996 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:40.996 "strip_size_kb": 64, 00:13:40.996 "state": "online", 00:13:40.996 "raid_level": "raid5f", 00:13:40.996 "superblock": false, 00:13:40.996 "num_base_bdevs": 3, 00:13:40.996 "num_base_bdevs_discovered": 3, 00:13:40.996 
"num_base_bdevs_operational": 3, 00:13:40.996 "process": { 00:13:40.996 "type": "rebuild", 00:13:40.996 "target": "spare", 00:13:40.996 "progress": { 00:13:40.996 "blocks": 116736, 00:13:40.996 "percent": 89 00:13:40.996 } 00:13:40.996 }, 00:13:40.996 "base_bdevs_list": [ 00:13:40.996 { 00:13:40.996 "name": "spare", 00:13:40.996 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:40.996 "is_configured": true, 00:13:40.996 "data_offset": 0, 00:13:40.996 "data_size": 65536 00:13:40.996 }, 00:13:40.996 { 00:13:40.996 "name": "BaseBdev2", 00:13:40.996 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:40.996 "is_configured": true, 00:13:40.996 "data_offset": 0, 00:13:40.996 "data_size": 65536 00:13:40.996 }, 00:13:40.996 { 00:13:40.996 "name": "BaseBdev3", 00:13:40.996 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:40.996 "is_configured": true, 00:13:40.996 "data_offset": 0, 00:13:40.996 "data_size": 65536 00:13:40.996 } 00:13:40.996 ] 00:13:40.996 }' 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.996 16:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.566 [2024-11-28 16:26:33.073964] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:41.566 [2024-11-28 16:26:33.074021] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:41.566 [2024-11-28 16:26:33.074064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.826 "name": "raid_bdev1", 00:13:41.826 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:41.826 "strip_size_kb": 64, 00:13:41.826 "state": "online", 00:13:41.826 "raid_level": "raid5f", 00:13:41.826 "superblock": false, 00:13:41.826 "num_base_bdevs": 3, 00:13:41.826 "num_base_bdevs_discovered": 3, 00:13:41.826 "num_base_bdevs_operational": 3, 00:13:41.826 "base_bdevs_list": [ 00:13:41.826 { 00:13:41.826 "name": "spare", 00:13:41.826 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:41.826 "is_configured": true, 00:13:41.826 "data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "name": "BaseBdev2", 00:13:41.826 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:41.826 "is_configured": true, 00:13:41.826 
"data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 }, 00:13:41.826 { 00:13:41.826 "name": "BaseBdev3", 00:13:41.826 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:41.826 "is_configured": true, 00:13:41.826 "data_offset": 0, 00:13:41.826 "data_size": 65536 00:13:41.826 } 00:13:41.826 ] 00:13:41.826 }' 00:13:41.826 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.087 16:26:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.087 "name": "raid_bdev1", 00:13:42.087 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:42.087 "strip_size_kb": 64, 00:13:42.087 "state": "online", 00:13:42.087 "raid_level": "raid5f", 00:13:42.087 "superblock": false, 00:13:42.087 "num_base_bdevs": 3, 00:13:42.087 "num_base_bdevs_discovered": 3, 00:13:42.087 "num_base_bdevs_operational": 3, 00:13:42.087 "base_bdevs_list": [ 00:13:42.087 { 00:13:42.087 "name": "spare", 00:13:42.087 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 "name": "BaseBdev2", 00:13:42.087 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 "name": "BaseBdev3", 00:13:42.087 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 } 00:13:42.087 ] 00:13:42.087 }' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.087 16:26:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.087 "name": "raid_bdev1", 00:13:42.087 "uuid": "6727380c-e184-4e73-aac5-f9b2af1b44a5", 00:13:42.087 "strip_size_kb": 64, 00:13:42.087 "state": "online", 00:13:42.087 "raid_level": "raid5f", 00:13:42.087 "superblock": false, 00:13:42.087 "num_base_bdevs": 3, 00:13:42.087 "num_base_bdevs_discovered": 3, 00:13:42.087 "num_base_bdevs_operational": 3, 00:13:42.087 "base_bdevs_list": [ 00:13:42.087 { 00:13:42.087 "name": "spare", 00:13:42.087 "uuid": "2a764c0a-198b-5388-8e69-e012bf2563e4", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 
"name": "BaseBdev2", 00:13:42.087 "uuid": "97ec6465-a731-520c-8d33-b4e8c1f12f3e", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 }, 00:13:42.087 { 00:13:42.087 "name": "BaseBdev3", 00:13:42.087 "uuid": "3375b798-a12b-54ec-a987-d9193ee367a3", 00:13:42.087 "is_configured": true, 00:13:42.087 "data_offset": 0, 00:13:42.087 "data_size": 65536 00:13:42.087 } 00:13:42.087 ] 00:13:42.087 }' 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.087 16:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.657 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.657 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.657 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.657 [2024-11-28 16:26:34.212935] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.657 [2024-11-28 16:26:34.213011] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.657 [2024-11-28 16:26:34.213127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.657 [2024-11-28 16:26:34.213245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.657 [2024-11-28 16:26:34.213288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:42.657 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.657 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.658 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:42.918 /dev/nbd0 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.918 1+0 records in 00:13:42.918 1+0 records out 00:13:42.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376812 s, 10.9 MB/s 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.918 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:43.178 /dev/nbd1 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.178 1+0 records in 00:13:43.178 1+0 records out 00:13:43.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385639 s, 10.6 MB/s 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:43.178 16:26:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:43.178 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.179 16:26:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.438 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92095 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92095 ']' 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92095 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92095 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92095'
00:13:43.698 killing process with pid 92095
00:13:43.698 Received shutdown signal, test time was about 60.000000 seconds
00:13:43.698
00:13:43.698 Latency(us)
00:13:43.698 [2024-11-28T16:26:35.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:43.698 [2024-11-28T16:26:35.469Z] ===================================================================================================================
00:13:43.698 [2024-11-28T16:26:35.469Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92095
00:13:43.698 [2024-11-28 16:26:35.299570] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:43.698 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92095
00:13:43.698 [2024-11-28 16:26:35.375108] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:13:44.269
00:13:44.269 real 0m13.750s
00:13:44.269 user 0m17.250s
00:13:44.269 sys 0m1.922s
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:44.269 ************************************
00:13:44.269 END TEST raid5f_rebuild_test
00:13:44.269 ************************************
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.269 16:26:35 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true
00:13:44.269 16:26:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:13:44.269 16:26:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:44.269 16:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:44.269 ************************************
00:13:44.269 START TEST raid5f_rebuild_test_sb
00:13:44.269 ************************************
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92520
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92520
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92520 ']'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:44.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:44.269 16:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:44.270 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:44.270 Zero copy mechanism will not be used.
00:13:44.270 [2024-11-28 16:26:35.909824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-11-28 16:26:35.909976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92520 ]
00:13:44.529 [2024-11-28 16:26:36.070208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:44.529 [2024-11-28 16:26:36.140893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:44.529 [2024-11-28 16:26:36.217755] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:44.529 [2024-11-28 16:26:36.217785] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in
"${base_bdevs[@]}" 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.098 BaseBdev1_malloc 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.098 [2024-11-28 16:26:36.756136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:45.098 [2024-11-28 16:26:36.756206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.098 [2024-11-28 16:26:36.756234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:45.098 [2024-11-28 16:26:36.756260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.098 [2024-11-28 16:26:36.758630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.098 [2024-11-28 16:26:36.758663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.098 BaseBdev1 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.098 16:26:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.098 BaseBdev2_malloc 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.098 [2024-11-28 16:26:36.806806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:45.098 [2024-11-28 16:26:36.806960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.098 [2024-11-28 16:26:36.807013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:45.098 [2024-11-28 16:26:36.807032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.098 [2024-11-28 16:26:36.811332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.098 [2024-11-28 16:26:36.811390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.098 BaseBdev2 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:45.098 BaseBdev3_malloc 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.098 [2024-11-28 16:26:36.843455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:45.098 [2024-11-28 16:26:36.843498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.098 [2024-11-28 16:26:36.843522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:45.098 [2024-11-28 16:26:36.843531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.098 [2024-11-28 16:26:36.845772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.098 [2024-11-28 16:26:36.845805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:45.098 BaseBdev3 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.098 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.358 spare_malloc 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.358 spare_delay 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.358 [2024-11-28 16:26:36.890312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.358 [2024-11-28 16:26:36.890357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.358 [2024-11-28 16:26:36.890381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:45.358 [2024-11-28 16:26:36.890389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.358 [2024-11-28 16:26:36.892656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.358 [2024-11-28 16:26:36.892688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.358 spare 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.358 [2024-11-28 16:26:36.902368] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.358 [2024-11-28 16:26:36.904362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.358 [2024-11-28 16:26:36.904509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.358 [2024-11-28 16:26:36.904669] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:45.358 [2024-11-28 16:26:36.904687] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:45.358 [2024-11-28 16:26:36.904946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:45.358 [2024-11-28 16:26:36.905349] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:45.358 [2024-11-28 16:26:36.905372] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:45.358 [2024-11-28 16:26:36.905487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.358 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.359 "name": "raid_bdev1", 00:13:45.359 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:45.359 "strip_size_kb": 64, 00:13:45.359 "state": "online", 00:13:45.359 "raid_level": "raid5f", 00:13:45.359 "superblock": true, 00:13:45.359 "num_base_bdevs": 3, 00:13:45.359 "num_base_bdevs_discovered": 3, 00:13:45.359 "num_base_bdevs_operational": 3, 00:13:45.359 "base_bdevs_list": [ 00:13:45.359 { 00:13:45.359 "name": "BaseBdev1", 00:13:45.359 "uuid": "fcb686dc-c6bd-5cb6-858f-b4e1693035fa", 00:13:45.359 "is_configured": true, 00:13:45.359 "data_offset": 2048, 00:13:45.359 "data_size": 63488 00:13:45.359 }, 00:13:45.359 { 00:13:45.359 "name": "BaseBdev2", 00:13:45.359 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:45.359 "is_configured": true, 00:13:45.359 "data_offset": 2048, 00:13:45.359 "data_size": 63488 00:13:45.359 }, 00:13:45.359 { 00:13:45.359 "name": "BaseBdev3", 00:13:45.359 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:45.359 "is_configured": true, 
00:13:45.359 "data_offset": 2048, 00:13:45.359 "data_size": 63488 00:13:45.359 } 00:13:45.359 ] 00:13:45.359 }' 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.359 16:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.619 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:45.619 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.619 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.619 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.619 [2024-11-28 16:26:37.383032] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:45.879 16:26:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:45.879 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:45.879 [2024-11-28 16:26:37.642420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:46.139 /dev/nbd0 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.139 1+0 records in 00:13:46.139 1+0 records out 00:13:46.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623949 s, 6.6 MB/s 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:46.139 16:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:46.399 496+0 records in 00:13:46.399 496+0 records out 00:13:46.399 65011712 bytes (65 MB, 62 MiB) copied, 0.321424 s, 202 MB/s 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.399 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:46.659 [2024-11-28 16:26:38.254940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.659 [2024-11-28 16:26:38.282999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.659 16:26:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.659 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.659 "name": "raid_bdev1", 00:13:46.659 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:46.659 "strip_size_kb": 64, 00:13:46.659 "state": "online", 00:13:46.659 "raid_level": "raid5f", 00:13:46.659 "superblock": true, 00:13:46.659 "num_base_bdevs": 3, 00:13:46.659 "num_base_bdevs_discovered": 2, 00:13:46.659 "num_base_bdevs_operational": 2, 00:13:46.659 "base_bdevs_list": [ 00:13:46.659 { 00:13:46.659 "name": null, 00:13:46.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.659 "is_configured": false, 00:13:46.659 "data_offset": 0, 00:13:46.659 "data_size": 63488 00:13:46.659 }, 00:13:46.659 { 00:13:46.659 "name": "BaseBdev2", 00:13:46.659 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:46.659 "is_configured": true, 00:13:46.659 "data_offset": 2048, 00:13:46.659 "data_size": 63488 00:13:46.659 }, 00:13:46.659 { 00:13:46.659 "name": "BaseBdev3", 00:13:46.659 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:46.659 "is_configured": true, 00:13:46.659 "data_offset": 2048, 00:13:46.659 "data_size": 63488 00:13:46.659 } 00:13:46.659 ] 00:13:46.659 }' 00:13:46.660 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.660 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.228 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.228 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.228 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.228 [2024-11-28 16:26:38.726235] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.228 [2024-11-28 16:26:38.732692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:13:47.228 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.228 16:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:47.228 [2024-11-28 16:26:38.735048] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.166 "name": "raid_bdev1", 00:13:48.166 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:48.166 "strip_size_kb": 64, 00:13:48.166 "state": "online", 00:13:48.166 "raid_level": "raid5f", 00:13:48.166 
"superblock": true, 00:13:48.166 "num_base_bdevs": 3, 00:13:48.166 "num_base_bdevs_discovered": 3, 00:13:48.166 "num_base_bdevs_operational": 3, 00:13:48.166 "process": { 00:13:48.166 "type": "rebuild", 00:13:48.166 "target": "spare", 00:13:48.166 "progress": { 00:13:48.166 "blocks": 20480, 00:13:48.166 "percent": 16 00:13:48.166 } 00:13:48.166 }, 00:13:48.166 "base_bdevs_list": [ 00:13:48.166 { 00:13:48.166 "name": "spare", 00:13:48.166 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:48.166 "is_configured": true, 00:13:48.166 "data_offset": 2048, 00:13:48.166 "data_size": 63488 00:13:48.166 }, 00:13:48.166 { 00:13:48.166 "name": "BaseBdev2", 00:13:48.166 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:48.166 "is_configured": true, 00:13:48.166 "data_offset": 2048, 00:13:48.166 "data_size": 63488 00:13:48.166 }, 00:13:48.166 { 00:13:48.166 "name": "BaseBdev3", 00:13:48.166 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:48.166 "is_configured": true, 00:13:48.166 "data_offset": 2048, 00:13:48.166 "data_size": 63488 00:13:48.166 } 00:13:48.166 ] 00:13:48.166 }' 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.166 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.166 [2024-11-28 16:26:39.894311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:48.426 [2024-11-28 16:26:39.942990] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.426 [2024-11-28 16:26:39.943045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.426 [2024-11-28 16:26:39.943061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.426 [2024-11-28 16:26:39.943074] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.426 16:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.426 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.426 "name": "raid_bdev1", 00:13:48.426 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:48.426 "strip_size_kb": 64, 00:13:48.426 "state": "online", 00:13:48.426 "raid_level": "raid5f", 00:13:48.426 "superblock": true, 00:13:48.426 "num_base_bdevs": 3, 00:13:48.426 "num_base_bdevs_discovered": 2, 00:13:48.426 "num_base_bdevs_operational": 2, 00:13:48.426 "base_bdevs_list": [ 00:13:48.426 { 00:13:48.426 "name": null, 00:13:48.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.426 "is_configured": false, 00:13:48.426 "data_offset": 0, 00:13:48.426 "data_size": 63488 00:13:48.426 }, 00:13:48.426 { 00:13:48.426 "name": "BaseBdev2", 00:13:48.426 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:48.426 "is_configured": true, 00:13:48.426 "data_offset": 2048, 00:13:48.426 "data_size": 63488 00:13:48.426 }, 00:13:48.426 { 00:13:48.426 "name": "BaseBdev3", 00:13:48.426 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:48.426 "is_configured": true, 00:13:48.426 "data_offset": 2048, 00:13:48.426 "data_size": 63488 00:13:48.426 } 00:13:48.426 ] 00:13:48.426 }' 00:13:48.426 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.426 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.688 16:26:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.688 "name": "raid_bdev1", 00:13:48.688 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:48.688 "strip_size_kb": 64, 00:13:48.688 "state": "online", 00:13:48.688 "raid_level": "raid5f", 00:13:48.688 "superblock": true, 00:13:48.688 "num_base_bdevs": 3, 00:13:48.688 "num_base_bdevs_discovered": 2, 00:13:48.688 "num_base_bdevs_operational": 2, 00:13:48.688 "base_bdevs_list": [ 00:13:48.688 { 00:13:48.688 "name": null, 00:13:48.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.688 "is_configured": false, 00:13:48.688 "data_offset": 0, 00:13:48.688 "data_size": 63488 00:13:48.688 }, 00:13:48.688 { 00:13:48.688 "name": "BaseBdev2", 00:13:48.688 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:48.688 "is_configured": true, 00:13:48.688 "data_offset": 2048, 00:13:48.688 "data_size": 63488 00:13:48.688 }, 00:13:48.688 { 00:13:48.688 "name": "BaseBdev3", 00:13:48.688 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:48.688 "is_configured": true, 00:13:48.688 "data_offset": 2048, 00:13:48.688 
"data_size": 63488 00:13:48.688 } 00:13:48.688 ] 00:13:48.688 }' 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.688 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.951 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.951 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.951 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.951 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.951 [2024-11-28 16:26:40.502704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.951 [2024-11-28 16:26:40.507138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:13:48.951 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.951 16:26:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:48.951 [2024-11-28 16:26:40.509543] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.889 "name": "raid_bdev1", 00:13:49.889 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:49.889 "strip_size_kb": 64, 00:13:49.889 "state": "online", 00:13:49.889 "raid_level": "raid5f", 00:13:49.889 "superblock": true, 00:13:49.889 "num_base_bdevs": 3, 00:13:49.889 "num_base_bdevs_discovered": 3, 00:13:49.889 "num_base_bdevs_operational": 3, 00:13:49.889 "process": { 00:13:49.889 "type": "rebuild", 00:13:49.889 "target": "spare", 00:13:49.889 "progress": { 00:13:49.889 "blocks": 20480, 00:13:49.889 "percent": 16 00:13:49.889 } 00:13:49.889 }, 00:13:49.889 "base_bdevs_list": [ 00:13:49.889 { 00:13:49.889 "name": "spare", 00:13:49.889 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:49.889 "is_configured": true, 00:13:49.889 "data_offset": 2048, 00:13:49.889 "data_size": 63488 00:13:49.889 }, 00:13:49.889 { 00:13:49.889 "name": "BaseBdev2", 00:13:49.889 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:49.889 "is_configured": true, 00:13:49.889 "data_offset": 2048, 00:13:49.889 "data_size": 63488 00:13:49.889 }, 00:13:49.889 { 00:13:49.889 "name": "BaseBdev3", 00:13:49.889 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:49.889 "is_configured": true, 00:13:49.889 "data_offset": 2048, 00:13:49.889 "data_size": 63488 00:13:49.889 } 00:13:49.889 ] 00:13:49.889 }' 
00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.889 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:50.148 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.148 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.148 "name": "raid_bdev1", 00:13:50.148 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:50.148 "strip_size_kb": 64, 00:13:50.148 "state": "online", 00:13:50.148 "raid_level": "raid5f", 00:13:50.148 "superblock": true, 00:13:50.148 "num_base_bdevs": 3, 00:13:50.148 "num_base_bdevs_discovered": 3, 00:13:50.148 "num_base_bdevs_operational": 3, 00:13:50.148 "process": { 00:13:50.148 "type": "rebuild", 00:13:50.148 "target": "spare", 00:13:50.148 "progress": { 00:13:50.148 "blocks": 22528, 00:13:50.148 "percent": 17 00:13:50.148 } 00:13:50.148 }, 00:13:50.148 "base_bdevs_list": [ 00:13:50.148 { 00:13:50.148 "name": "spare", 00:13:50.148 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:50.148 "is_configured": true, 00:13:50.148 "data_offset": 2048, 00:13:50.149 "data_size": 63488 00:13:50.149 }, 00:13:50.149 { 00:13:50.149 "name": "BaseBdev2", 00:13:50.149 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:50.149 "is_configured": true, 00:13:50.149 "data_offset": 2048, 00:13:50.149 "data_size": 63488 00:13:50.149 }, 00:13:50.149 { 00:13:50.149 "name": "BaseBdev3", 00:13:50.149 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:50.149 "is_configured": true, 00:13:50.149 "data_offset": 2048, 00:13:50.149 "data_size": 63488 00:13:50.149 } 00:13:50.149 ] 00:13:50.149 }' 00:13:50.149 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.149 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:50.149 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.149 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.149 16:26:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.087 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.347 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.347 "name": "raid_bdev1", 00:13:51.347 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:51.347 "strip_size_kb": 64, 00:13:51.347 "state": "online", 00:13:51.347 "raid_level": "raid5f", 00:13:51.347 "superblock": true, 00:13:51.347 "num_base_bdevs": 3, 00:13:51.347 "num_base_bdevs_discovered": 3, 00:13:51.347 
"num_base_bdevs_operational": 3, 00:13:51.347 "process": { 00:13:51.347 "type": "rebuild", 00:13:51.347 "target": "spare", 00:13:51.347 "progress": { 00:13:51.347 "blocks": 45056, 00:13:51.347 "percent": 35 00:13:51.347 } 00:13:51.347 }, 00:13:51.347 "base_bdevs_list": [ 00:13:51.347 { 00:13:51.347 "name": "spare", 00:13:51.347 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:51.347 "is_configured": true, 00:13:51.347 "data_offset": 2048, 00:13:51.347 "data_size": 63488 00:13:51.347 }, 00:13:51.347 { 00:13:51.347 "name": "BaseBdev2", 00:13:51.347 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:51.347 "is_configured": true, 00:13:51.347 "data_offset": 2048, 00:13:51.347 "data_size": 63488 00:13:51.347 }, 00:13:51.347 { 00:13:51.347 "name": "BaseBdev3", 00:13:51.347 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:51.347 "is_configured": true, 00:13:51.347 "data_offset": 2048, 00:13:51.347 "data_size": 63488 00:13:51.347 } 00:13:51.347 ] 00:13:51.347 }' 00:13:51.347 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.347 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.347 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.347 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.347 16:26:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.284 16:26:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.284 16:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.284 "name": "raid_bdev1", 00:13:52.284 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:52.284 "strip_size_kb": 64, 00:13:52.284 "state": "online", 00:13:52.284 "raid_level": "raid5f", 00:13:52.284 "superblock": true, 00:13:52.284 "num_base_bdevs": 3, 00:13:52.284 "num_base_bdevs_discovered": 3, 00:13:52.284 "num_base_bdevs_operational": 3, 00:13:52.284 "process": { 00:13:52.284 "type": "rebuild", 00:13:52.284 "target": "spare", 00:13:52.285 "progress": { 00:13:52.285 "blocks": 69632, 00:13:52.285 "percent": 54 00:13:52.285 } 00:13:52.285 }, 00:13:52.285 "base_bdevs_list": [ 00:13:52.285 { 00:13:52.285 "name": "spare", 00:13:52.285 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:52.285 "is_configured": true, 00:13:52.285 "data_offset": 2048, 00:13:52.285 "data_size": 63488 00:13:52.285 }, 00:13:52.285 { 00:13:52.285 "name": "BaseBdev2", 00:13:52.285 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:52.285 "is_configured": true, 00:13:52.285 "data_offset": 2048, 00:13:52.285 "data_size": 63488 00:13:52.285 }, 00:13:52.285 { 00:13:52.285 "name": "BaseBdev3", 
00:13:52.285 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:52.285 "is_configured": true, 00:13:52.285 "data_offset": 2048, 00:13:52.285 "data_size": 63488 00:13:52.285 } 00:13:52.285 ] 00:13:52.285 }' 00:13:52.285 16:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.544 16:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.544 16:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.544 16:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.544 16:26:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.481 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.481 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.481 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.481 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.481 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.482 "name": "raid_bdev1", 00:13:53.482 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:53.482 "strip_size_kb": 64, 00:13:53.482 "state": "online", 00:13:53.482 "raid_level": "raid5f", 00:13:53.482 "superblock": true, 00:13:53.482 "num_base_bdevs": 3, 00:13:53.482 "num_base_bdevs_discovered": 3, 00:13:53.482 "num_base_bdevs_operational": 3, 00:13:53.482 "process": { 00:13:53.482 "type": "rebuild", 00:13:53.482 "target": "spare", 00:13:53.482 "progress": { 00:13:53.482 "blocks": 94208, 00:13:53.482 "percent": 74 00:13:53.482 } 00:13:53.482 }, 00:13:53.482 "base_bdevs_list": [ 00:13:53.482 { 00:13:53.482 "name": "spare", 00:13:53.482 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:53.482 "is_configured": true, 00:13:53.482 "data_offset": 2048, 00:13:53.482 "data_size": 63488 00:13:53.482 }, 00:13:53.482 { 00:13:53.482 "name": "BaseBdev2", 00:13:53.482 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:53.482 "is_configured": true, 00:13:53.482 "data_offset": 2048, 00:13:53.482 "data_size": 63488 00:13:53.482 }, 00:13:53.482 { 00:13:53.482 "name": "BaseBdev3", 00:13:53.482 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:53.482 "is_configured": true, 00:13:53.482 "data_offset": 2048, 00:13:53.482 "data_size": 63488 00:13:53.482 } 00:13:53.482 ] 00:13:53.482 }' 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.482 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.741 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.741 16:26:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.678 16:26:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.678 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.679 "name": "raid_bdev1", 00:13:54.679 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:54.679 "strip_size_kb": 64, 00:13:54.679 "state": "online", 00:13:54.679 "raid_level": "raid5f", 00:13:54.679 "superblock": true, 00:13:54.679 "num_base_bdevs": 3, 00:13:54.679 "num_base_bdevs_discovered": 3, 00:13:54.679 "num_base_bdevs_operational": 3, 00:13:54.679 "process": { 00:13:54.679 "type": "rebuild", 00:13:54.679 "target": "spare", 00:13:54.679 "progress": { 00:13:54.679 "blocks": 116736, 00:13:54.679 "percent": 91 00:13:54.679 } 00:13:54.679 }, 00:13:54.679 "base_bdevs_list": [ 00:13:54.679 { 00:13:54.679 "name": "spare", 00:13:54.679 "uuid": 
"c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:54.679 "is_configured": true, 00:13:54.679 "data_offset": 2048, 00:13:54.679 "data_size": 63488 00:13:54.679 }, 00:13:54.679 { 00:13:54.679 "name": "BaseBdev2", 00:13:54.679 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:54.679 "is_configured": true, 00:13:54.679 "data_offset": 2048, 00:13:54.679 "data_size": 63488 00:13:54.679 }, 00:13:54.679 { 00:13:54.679 "name": "BaseBdev3", 00:13:54.679 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:54.679 "is_configured": true, 00:13:54.679 "data_offset": 2048, 00:13:54.679 "data_size": 63488 00:13:54.679 } 00:13:54.679 ] 00:13:54.679 }' 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.679 16:26:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.247 [2024-11-28 16:26:46.747658] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.247 [2024-11-28 16:26:46.747736] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.247 [2024-11-28 16:26:46.747865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.816 "name": "raid_bdev1", 00:13:55.816 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:55.816 "strip_size_kb": 64, 00:13:55.816 "state": "online", 00:13:55.816 "raid_level": "raid5f", 00:13:55.816 "superblock": true, 00:13:55.816 "num_base_bdevs": 3, 00:13:55.816 "num_base_bdevs_discovered": 3, 00:13:55.816 "num_base_bdevs_operational": 3, 00:13:55.816 "base_bdevs_list": [ 00:13:55.816 { 00:13:55.816 "name": "spare", 00:13:55.816 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:55.816 "is_configured": true, 00:13:55.816 "data_offset": 2048, 00:13:55.816 "data_size": 63488 00:13:55.816 }, 00:13:55.816 { 00:13:55.816 "name": "BaseBdev2", 00:13:55.816 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:55.816 "is_configured": true, 00:13:55.816 "data_offset": 2048, 00:13:55.816 "data_size": 63488 00:13:55.816 }, 00:13:55.816 { 00:13:55.816 "name": "BaseBdev3", 00:13:55.816 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:55.816 "is_configured": true, 00:13:55.816 "data_offset": 2048, 00:13:55.816 "data_size": 63488 00:13:55.816 } 
00:13:55.816 ] 00:13:55.816 }' 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.816 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.076 "name": "raid_bdev1", 00:13:56.076 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:56.076 "strip_size_kb": 64, 00:13:56.076 "state": "online", 00:13:56.076 "raid_level": 
"raid5f", 00:13:56.076 "superblock": true, 00:13:56.076 "num_base_bdevs": 3, 00:13:56.076 "num_base_bdevs_discovered": 3, 00:13:56.076 "num_base_bdevs_operational": 3, 00:13:56.076 "base_bdevs_list": [ 00:13:56.076 { 00:13:56.076 "name": "spare", 00:13:56.076 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:56.076 "is_configured": true, 00:13:56.076 "data_offset": 2048, 00:13:56.076 "data_size": 63488 00:13:56.076 }, 00:13:56.076 { 00:13:56.076 "name": "BaseBdev2", 00:13:56.076 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:56.076 "is_configured": true, 00:13:56.076 "data_offset": 2048, 00:13:56.076 "data_size": 63488 00:13:56.076 }, 00:13:56.076 { 00:13:56.076 "name": "BaseBdev3", 00:13:56.076 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:56.076 "is_configured": true, 00:13:56.076 "data_offset": 2048, 00:13:56.076 "data_size": 63488 00:13:56.076 } 00:13:56.076 ] 00:13:56.076 }' 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.076 16:26:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.076 "name": "raid_bdev1", 00:13:56.076 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:56.076 "strip_size_kb": 64, 00:13:56.076 "state": "online", 00:13:56.076 "raid_level": "raid5f", 00:13:56.076 "superblock": true, 00:13:56.076 "num_base_bdevs": 3, 00:13:56.076 "num_base_bdevs_discovered": 3, 00:13:56.076 "num_base_bdevs_operational": 3, 00:13:56.076 "base_bdevs_list": [ 00:13:56.076 { 00:13:56.076 "name": "spare", 00:13:56.076 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:56.076 "is_configured": true, 00:13:56.076 "data_offset": 2048, 00:13:56.076 "data_size": 63488 00:13:56.076 }, 00:13:56.076 { 00:13:56.076 "name": "BaseBdev2", 00:13:56.076 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:56.076 "is_configured": true, 00:13:56.076 "data_offset": 2048, 00:13:56.076 
"data_size": 63488 00:13:56.076 }, 00:13:56.076 { 00:13:56.076 "name": "BaseBdev3", 00:13:56.076 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:56.076 "is_configured": true, 00:13:56.076 "data_offset": 2048, 00:13:56.076 "data_size": 63488 00:13:56.076 } 00:13:56.076 ] 00:13:56.076 }' 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.076 16:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.645 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.645 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.645 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.645 [2024-11-28 16:26:48.158085] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.645 [2024-11-28 16:26:48.158169] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.645 [2024-11-28 16:26:48.158280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.645 [2024-11-28 16:26:48.158387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.645 [2024-11-28 16:26:48.158462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:56.645 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.645 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.646 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:56.646 /dev/nbd0 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.906 1+0 records in 00:13:56.906 1+0 records out 00:13:56.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596285 s, 6.9 MB/s 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.906 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:56.906 /dev/nbd1 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.166 1+0 records in 00:13:57.166 1+0 records out 00:13:57.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402032 s, 10.2 MB/s 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- 
# '[' 4096 '!=' 0 ']' 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.166 16:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.426 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.686 [2024-11-28 16:26:49.252238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.686 [2024-11-28 16:26:49.252305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.686 [2024-11-28 16:26:49.252331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:57.686 [2024-11-28 16:26:49.252341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.686 [2024-11-28 16:26:49.254813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.686 [2024-11-28 16:26:49.254923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.686 [2024-11-28 16:26:49.255025] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:57.686 [2024-11-28 16:26:49.255072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.686 [2024-11-28 16:26:49.255203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.686 [2024-11-28 16:26:49.255308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.686 spare 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.686 [2024-11-28 16:26:49.355201] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:57.686 [2024-11-28 16:26:49.355225] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.686 [2024-11-28 16:26:49.355490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:13:57.686 [2024-11-28 16:26:49.355959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:57.686 [2024-11-28 16:26:49.355990] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:57.686 [2024-11-28 16:26:49.356140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.686 16:26:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.686 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.686 "name": "raid_bdev1", 00:13:57.686 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:57.686 "strip_size_kb": 64, 00:13:57.686 "state": "online", 00:13:57.686 "raid_level": "raid5f", 00:13:57.686 "superblock": true, 00:13:57.686 "num_base_bdevs": 3, 00:13:57.686 "num_base_bdevs_discovered": 3, 00:13:57.686 "num_base_bdevs_operational": 3, 00:13:57.686 "base_bdevs_list": [ 00:13:57.686 { 00:13:57.686 "name": "spare", 00:13:57.686 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:57.686 "is_configured": true, 00:13:57.686 "data_offset": 2048, 00:13:57.686 "data_size": 63488 00:13:57.686 }, 00:13:57.687 { 00:13:57.687 "name": "BaseBdev2", 00:13:57.687 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:57.687 "is_configured": true, 00:13:57.687 "data_offset": 2048, 00:13:57.687 "data_size": 63488 00:13:57.687 }, 00:13:57.687 { 00:13:57.687 "name": "BaseBdev3", 00:13:57.687 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:57.687 "is_configured": true, 00:13:57.687 "data_offset": 2048, 00:13:57.687 "data_size": 63488 00:13:57.687 } 00:13:57.687 ] 00:13:57.687 }' 00:13:57.687 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.687 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.256 16:26:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.256 "name": "raid_bdev1", 00:13:58.256 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:58.256 "strip_size_kb": 64, 00:13:58.256 "state": "online", 00:13:58.256 "raid_level": "raid5f", 00:13:58.256 "superblock": true, 00:13:58.256 "num_base_bdevs": 3, 00:13:58.256 "num_base_bdevs_discovered": 3, 00:13:58.256 "num_base_bdevs_operational": 3, 00:13:58.256 "base_bdevs_list": [ 00:13:58.256 { 00:13:58.256 "name": "spare", 00:13:58.256 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:58.256 "is_configured": true, 00:13:58.256 "data_offset": 2048, 00:13:58.256 "data_size": 63488 00:13:58.256 }, 00:13:58.256 { 00:13:58.256 "name": "BaseBdev2", 00:13:58.256 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:58.256 "is_configured": true, 00:13:58.256 "data_offset": 2048, 00:13:58.256 "data_size": 63488 00:13:58.256 }, 00:13:58.256 { 00:13:58.256 "name": "BaseBdev3", 00:13:58.256 "uuid": 
"2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:58.256 "is_configured": true, 00:13:58.256 "data_offset": 2048, 00:13:58.256 "data_size": 63488 00:13:58.256 } 00:13:58.256 ] 00:13:58.256 }' 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.256 16:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.256 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.256 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.256 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.256 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.516 [2024-11-28 16:26:50.027237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:58.516 
16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.516 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.517 "name": "raid_bdev1", 00:13:58.517 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:58.517 "strip_size_kb": 64, 00:13:58.517 "state": "online", 00:13:58.517 "raid_level": "raid5f", 00:13:58.517 "superblock": true, 00:13:58.517 "num_base_bdevs": 3, 00:13:58.517 "num_base_bdevs_discovered": 2, 00:13:58.517 "num_base_bdevs_operational": 2, 
00:13:58.517 "base_bdevs_list": [ 00:13:58.517 { 00:13:58.517 "name": null, 00:13:58.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.517 "is_configured": false, 00:13:58.517 "data_offset": 0, 00:13:58.517 "data_size": 63488 00:13:58.517 }, 00:13:58.517 { 00:13:58.517 "name": "BaseBdev2", 00:13:58.517 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:58.517 "is_configured": true, 00:13:58.517 "data_offset": 2048, 00:13:58.517 "data_size": 63488 00:13:58.517 }, 00:13:58.517 { 00:13:58.517 "name": "BaseBdev3", 00:13:58.517 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:58.517 "is_configured": true, 00:13:58.517 "data_offset": 2048, 00:13:58.517 "data_size": 63488 00:13:58.517 } 00:13:58.517 ] 00:13:58.517 }' 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.517 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.776 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.776 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.776 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.776 [2024-11-28 16:26:50.462505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.776 [2024-11-28 16:26:50.462701] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:58.776 [2024-11-28 16:26:50.462758] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:58.776 [2024-11-28 16:26:50.462813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.776 [2024-11-28 16:26:50.469136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:13:58.776 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.776 16:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:58.776 [2024-11-28 16:26:50.471441] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.714 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.974 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.974 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.974 "name": "raid_bdev1", 00:13:59.974 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:13:59.974 "strip_size_kb": 64, 00:13:59.974 "state": "online", 00:13:59.974 
"raid_level": "raid5f", 00:13:59.974 "superblock": true, 00:13:59.974 "num_base_bdevs": 3, 00:13:59.974 "num_base_bdevs_discovered": 3, 00:13:59.974 "num_base_bdevs_operational": 3, 00:13:59.974 "process": { 00:13:59.974 "type": "rebuild", 00:13:59.974 "target": "spare", 00:13:59.974 "progress": { 00:13:59.974 "blocks": 20480, 00:13:59.974 "percent": 16 00:13:59.974 } 00:13:59.974 }, 00:13:59.974 "base_bdevs_list": [ 00:13:59.974 { 00:13:59.974 "name": "spare", 00:13:59.974 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:13:59.974 "is_configured": true, 00:13:59.974 "data_offset": 2048, 00:13:59.974 "data_size": 63488 00:13:59.974 }, 00:13:59.974 { 00:13:59.974 "name": "BaseBdev2", 00:13:59.974 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:13:59.974 "is_configured": true, 00:13:59.974 "data_offset": 2048, 00:13:59.974 "data_size": 63488 00:13:59.974 }, 00:13:59.974 { 00:13:59.974 "name": "BaseBdev3", 00:13:59.974 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:13:59.974 "is_configured": true, 00:13:59.974 "data_offset": 2048, 00:13:59.975 "data_size": 63488 00:13:59.975 } 00:13:59.975 ] 00:13:59.975 }' 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.975 [2024-11-28 16:26:51.611212] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.975 [2024-11-28 16:26:51.679284] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:59.975 [2024-11-28 16:26:51.679379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.975 [2024-11-28 16:26:51.679400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.975 [2024-11-28 16:26:51.679408] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.975 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.234 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.234 "name": "raid_bdev1", 00:14:00.234 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:00.234 "strip_size_kb": 64, 00:14:00.234 "state": "online", 00:14:00.234 "raid_level": "raid5f", 00:14:00.234 "superblock": true, 00:14:00.234 "num_base_bdevs": 3, 00:14:00.234 "num_base_bdevs_discovered": 2, 00:14:00.234 "num_base_bdevs_operational": 2, 00:14:00.234 "base_bdevs_list": [ 00:14:00.234 { 00:14:00.234 "name": null, 00:14:00.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.234 "is_configured": false, 00:14:00.234 "data_offset": 0, 00:14:00.234 "data_size": 63488 00:14:00.234 }, 00:14:00.234 { 00:14:00.234 "name": "BaseBdev2", 00:14:00.234 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:00.234 "is_configured": true, 00:14:00.234 "data_offset": 2048, 00:14:00.234 "data_size": 63488 00:14:00.234 }, 00:14:00.234 { 00:14:00.234 "name": "BaseBdev3", 00:14:00.234 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:00.234 "is_configured": true, 00:14:00.234 "data_offset": 2048, 00:14:00.234 "data_size": 63488 00:14:00.234 } 00:14:00.234 ] 00:14:00.234 }' 00:14:00.234 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.234 16:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.493 16:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.493 16:26:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.493 16:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.493 [2024-11-28 16:26:52.126920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.493 [2024-11-28 16:26:52.127025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.493 [2024-11-28 16:26:52.127066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:00.493 [2024-11-28 16:26:52.127095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.493 [2024-11-28 16:26:52.127615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.493 [2024-11-28 16:26:52.127670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.493 [2024-11-28 16:26:52.127783] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:00.493 [2024-11-28 16:26:52.127838] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:00.493 [2024-11-28 16:26:52.127885] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:00.493 [2024-11-28 16:26:52.127954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.493 [2024-11-28 16:26:52.132523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:00.493 spare 00:14:00.493 16:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.493 16:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:00.493 [2024-11-28 16:26:52.134973] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.476 "name": "raid_bdev1", 00:14:01.476 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:01.476 "strip_size_kb": 64, 00:14:01.476 "state": 
"online", 00:14:01.476 "raid_level": "raid5f", 00:14:01.476 "superblock": true, 00:14:01.476 "num_base_bdevs": 3, 00:14:01.476 "num_base_bdevs_discovered": 3, 00:14:01.476 "num_base_bdevs_operational": 3, 00:14:01.476 "process": { 00:14:01.476 "type": "rebuild", 00:14:01.476 "target": "spare", 00:14:01.476 "progress": { 00:14:01.476 "blocks": 20480, 00:14:01.476 "percent": 16 00:14:01.476 } 00:14:01.476 }, 00:14:01.476 "base_bdevs_list": [ 00:14:01.476 { 00:14:01.476 "name": "spare", 00:14:01.476 "uuid": "c0378789-621a-55bf-9d22-6d29b2bdae68", 00:14:01.476 "is_configured": true, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 }, 00:14:01.476 { 00:14:01.476 "name": "BaseBdev2", 00:14:01.476 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:01.476 "is_configured": true, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 }, 00:14:01.476 { 00:14:01.476 "name": "BaseBdev3", 00:14:01.476 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:01.476 "is_configured": true, 00:14:01.476 "data_offset": 2048, 00:14:01.476 "data_size": 63488 00:14:01.476 } 00:14:01.476 ] 00:14:01.476 }' 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.476 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.747 [2024-11-28 16:26:53.274229] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.747 [2024-11-28 16:26:53.342888] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.747 [2024-11-28 16:26:53.342941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.747 [2024-11-28 16:26:53.342956] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.747 [2024-11-28 16:26:53.342969] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.747 "name": "raid_bdev1", 00:14:01.747 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:01.747 "strip_size_kb": 64, 00:14:01.747 "state": "online", 00:14:01.747 "raid_level": "raid5f", 00:14:01.747 "superblock": true, 00:14:01.747 "num_base_bdevs": 3, 00:14:01.747 "num_base_bdevs_discovered": 2, 00:14:01.747 "num_base_bdevs_operational": 2, 00:14:01.747 "base_bdevs_list": [ 00:14:01.747 { 00:14:01.747 "name": null, 00:14:01.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.747 "is_configured": false, 00:14:01.747 "data_offset": 0, 00:14:01.747 "data_size": 63488 00:14:01.747 }, 00:14:01.747 { 00:14:01.747 "name": "BaseBdev2", 00:14:01.747 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:01.747 "is_configured": true, 00:14:01.747 "data_offset": 2048, 00:14:01.747 "data_size": 63488 00:14:01.747 }, 00:14:01.747 { 00:14:01.747 "name": "BaseBdev3", 00:14:01.747 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:01.747 "is_configured": true, 00:14:01.747 "data_offset": 2048, 00:14:01.747 "data_size": 63488 00:14:01.747 } 00:14:01.747 ] 00:14:01.747 }' 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.747 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.006 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.264 "name": "raid_bdev1", 00:14:02.264 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:02.264 "strip_size_kb": 64, 00:14:02.264 "state": "online", 00:14:02.264 "raid_level": "raid5f", 00:14:02.264 "superblock": true, 00:14:02.264 "num_base_bdevs": 3, 00:14:02.264 "num_base_bdevs_discovered": 2, 00:14:02.264 "num_base_bdevs_operational": 2, 00:14:02.264 "base_bdevs_list": [ 00:14:02.264 { 00:14:02.264 "name": null, 00:14:02.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.264 "is_configured": false, 00:14:02.264 "data_offset": 0, 00:14:02.264 "data_size": 63488 00:14:02.264 }, 00:14:02.264 { 00:14:02.264 "name": "BaseBdev2", 00:14:02.264 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:02.264 "is_configured": true, 00:14:02.264 "data_offset": 2048, 00:14:02.264 "data_size": 63488 00:14:02.264 }, 00:14:02.264 { 00:14:02.264 "name": "BaseBdev3", 00:14:02.264 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:02.264 "is_configured": true, 
00:14:02.264 "data_offset": 2048, 00:14:02.264 "data_size": 63488 00:14:02.264 } 00:14:02.264 ] 00:14:02.264 }' 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.264 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.264 [2024-11-28 16:26:53.910408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.264 [2024-11-28 16:26:53.910463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.264 [2024-11-28 16:26:53.910485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:02.264 [2024-11-28 16:26:53.910497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.264 [2024-11-28 16:26:53.910938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.265 [2024-11-28 
16:26:53.910959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.265 [2024-11-28 16:26:53.911031] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:02.265 [2024-11-28 16:26:53.911047] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:02.265 [2024-11-28 16:26:53.911055] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:02.265 [2024-11-28 16:26:53.911068] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:02.265 BaseBdev1 00:14:02.265 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.265 16:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.201 16:26:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.201 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.461 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.461 "name": "raid_bdev1", 00:14:03.461 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:03.461 "strip_size_kb": 64, 00:14:03.461 "state": "online", 00:14:03.461 "raid_level": "raid5f", 00:14:03.461 "superblock": true, 00:14:03.461 "num_base_bdevs": 3, 00:14:03.461 "num_base_bdevs_discovered": 2, 00:14:03.461 "num_base_bdevs_operational": 2, 00:14:03.461 "base_bdevs_list": [ 00:14:03.461 { 00:14:03.461 "name": null, 00:14:03.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.461 "is_configured": false, 00:14:03.461 "data_offset": 0, 00:14:03.461 "data_size": 63488 00:14:03.461 }, 00:14:03.461 { 00:14:03.461 "name": "BaseBdev2", 00:14:03.461 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:03.461 "is_configured": true, 00:14:03.461 "data_offset": 2048, 00:14:03.461 "data_size": 63488 00:14:03.461 }, 00:14:03.461 { 00:14:03.461 "name": "BaseBdev3", 00:14:03.461 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:03.461 "is_configured": true, 00:14:03.461 "data_offset": 2048, 00:14:03.461 "data_size": 63488 00:14:03.461 } 00:14:03.461 ] 00:14:03.461 }' 00:14:03.461 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.461 16:26:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.719 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.719 "name": "raid_bdev1", 00:14:03.719 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:03.719 "strip_size_kb": 64, 00:14:03.719 "state": "online", 00:14:03.719 "raid_level": "raid5f", 00:14:03.719 "superblock": true, 00:14:03.719 "num_base_bdevs": 3, 00:14:03.719 "num_base_bdevs_discovered": 2, 00:14:03.719 "num_base_bdevs_operational": 2, 00:14:03.719 "base_bdevs_list": [ 00:14:03.719 { 00:14:03.719 "name": null, 00:14:03.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.719 "is_configured": false, 00:14:03.719 "data_offset": 0, 00:14:03.719 "data_size": 63488 00:14:03.719 }, 00:14:03.719 { 00:14:03.719 "name": "BaseBdev2", 00:14:03.720 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 
00:14:03.720 "is_configured": true, 00:14:03.720 "data_offset": 2048, 00:14:03.720 "data_size": 63488 00:14:03.720 }, 00:14:03.720 { 00:14:03.720 "name": "BaseBdev3", 00:14:03.720 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:03.720 "is_configured": true, 00:14:03.720 "data_offset": 2048, 00:14:03.720 "data_size": 63488 00:14:03.720 } 00:14:03.720 ] 00:14:03.720 }' 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.720 16:26:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.720 [2024-11-28 16:26:55.455980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.720 [2024-11-28 16:26:55.456178] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:03.720 [2024-11-28 16:26:55.456197] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:03.720 request: 00:14:03.720 { 00:14:03.720 "base_bdev": "BaseBdev1", 00:14:03.720 "raid_bdev": "raid_bdev1", 00:14:03.720 "method": "bdev_raid_add_base_bdev", 00:14:03.720 "req_id": 1 00:14:03.720 } 00:14:03.720 Got JSON-RPC error response 00:14:03.720 response: 00:14:03.720 { 00:14:03.720 "code": -22, 00:14:03.720 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:03.720 } 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.720 16:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.099 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.099 "name": "raid_bdev1", 00:14:05.099 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:05.099 "strip_size_kb": 64, 00:14:05.099 "state": "online", 00:14:05.099 "raid_level": "raid5f", 00:14:05.099 "superblock": true, 00:14:05.099 "num_base_bdevs": 3, 00:14:05.099 "num_base_bdevs_discovered": 2, 00:14:05.099 "num_base_bdevs_operational": 2, 00:14:05.099 "base_bdevs_list": [ 00:14:05.099 { 00:14:05.099 "name": null, 00:14:05.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.099 "is_configured": false, 00:14:05.099 "data_offset": 0, 00:14:05.099 "data_size": 63488 00:14:05.099 }, 00:14:05.099 { 00:14:05.099 
"name": "BaseBdev2", 00:14:05.099 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:05.099 "is_configured": true, 00:14:05.100 "data_offset": 2048, 00:14:05.100 "data_size": 63488 00:14:05.100 }, 00:14:05.100 { 00:14:05.100 "name": "BaseBdev3", 00:14:05.100 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:05.100 "is_configured": true, 00:14:05.100 "data_offset": 2048, 00:14:05.100 "data_size": 63488 00:14:05.100 } 00:14:05.100 ] 00:14:05.100 }' 00:14:05.100 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.100 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.360 "name": "raid_bdev1", 00:14:05.360 "uuid": "0eee5150-4111-4a4a-9be0-ab407bba4fd9", 00:14:05.360 
"strip_size_kb": 64, 00:14:05.360 "state": "online", 00:14:05.360 "raid_level": "raid5f", 00:14:05.360 "superblock": true, 00:14:05.360 "num_base_bdevs": 3, 00:14:05.360 "num_base_bdevs_discovered": 2, 00:14:05.360 "num_base_bdevs_operational": 2, 00:14:05.360 "base_bdevs_list": [ 00:14:05.360 { 00:14:05.360 "name": null, 00:14:05.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.360 "is_configured": false, 00:14:05.360 "data_offset": 0, 00:14:05.360 "data_size": 63488 00:14:05.360 }, 00:14:05.360 { 00:14:05.360 "name": "BaseBdev2", 00:14:05.360 "uuid": "77ffa143-29f6-5cae-ac86-5f0568c57c2e", 00:14:05.360 "is_configured": true, 00:14:05.360 "data_offset": 2048, 00:14:05.360 "data_size": 63488 00:14:05.360 }, 00:14:05.360 { 00:14:05.360 "name": "BaseBdev3", 00:14:05.360 "uuid": "2381578b-828f-5c38-ada6-84859c0c06d1", 00:14:05.360 "is_configured": true, 00:14:05.360 "data_offset": 2048, 00:14:05.360 "data_size": 63488 00:14:05.360 } 00:14:05.360 ] 00:14:05.360 }' 00:14:05.360 16:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92520 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92520 ']' 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92520 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.360 16:26:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92520 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.360 killing process with pid 92520 00:14:05.360 Received shutdown signal, test time was about 60.000000 seconds 00:14:05.360 00:14:05.360 Latency(us) 00:14:05.360 [2024-11-28T16:26:57.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.360 [2024-11-28T16:26:57.131Z] =================================================================================================================== 00:14:05.360 [2024-11-28T16:26:57.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92520' 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92520 00:14:05.360 [2024-11-28 16:26:57.085394] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.360 [2024-11-28 16:26:57.085510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.360 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92520 00:14:05.360 [2024-11-28 16:26:57.085576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.360 [2024-11-28 16:26:57.085586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:05.621 [2024-11-28 16:26:57.162189] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.882 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:05.882 00:14:05.882 real 0m21.709s 00:14:05.882 user 0m27.954s 
00:14:05.882 sys 0m2.904s 00:14:05.882 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.882 16:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.882 ************************************ 00:14:05.882 END TEST raid5f_rebuild_test_sb 00:14:05.882 ************************************ 00:14:05.882 16:26:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:05.882 16:26:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:05.882 16:26:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:05.882 16:26:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.882 16:26:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.882 ************************************ 00:14:05.882 START TEST raid5f_state_function_test 00:14:05.882 ************************************ 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93250 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:05.882 Process raid pid: 93250 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93250' 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93250 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93250 ']' 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.882 16:26:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.142 [2024-11-28 16:26:57.713519] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:06.142 [2024-11-28 16:26:57.713660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.142 [2024-11-28 16:26:57.882416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.401 [2024-11-28 16:26:57.955250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.401 [2024-11-28 16:26:58.031393] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.401 [2024-11-28 16:26:58.031433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.970 [2024-11-28 16:26:58.531022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.970 [2024-11-28 16:26:58.531072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:06.970 [2024-11-28 16:26:58.531085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.970 [2024-11-28 16:26:58.531095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.970 [2024-11-28 16:26:58.531101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:06.970 [2024-11-28 16:26:58.531113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.970 [2024-11-28 16:26:58.531119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:06.970 [2024-11-28 16:26:58.531129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.970 16:26:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.970 "name": "Existed_Raid", 00:14:06.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.970 "strip_size_kb": 64, 00:14:06.970 "state": "configuring", 00:14:06.970 "raid_level": "raid5f", 00:14:06.970 "superblock": false, 00:14:06.970 "num_base_bdevs": 4, 00:14:06.970 "num_base_bdevs_discovered": 0, 00:14:06.970 "num_base_bdevs_operational": 4, 00:14:06.970 "base_bdevs_list": [ 00:14:06.970 { 00:14:06.970 "name": "BaseBdev1", 00:14:06.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.970 "is_configured": false, 00:14:06.970 "data_offset": 0, 00:14:06.970 "data_size": 0 00:14:06.970 }, 00:14:06.970 { 00:14:06.970 "name": "BaseBdev2", 00:14:06.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.970 "is_configured": false, 00:14:06.970 "data_offset": 0, 00:14:06.970 "data_size": 0 00:14:06.970 }, 00:14:06.970 { 00:14:06.970 "name": "BaseBdev3", 00:14:06.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.970 "is_configured": false, 00:14:06.970 "data_offset": 0, 00:14:06.970 "data_size": 0 00:14:06.970 }, 00:14:06.970 { 00:14:06.970 "name": "BaseBdev4", 00:14:06.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.970 "is_configured": false, 00:14:06.970 "data_offset": 0, 00:14:06.970 "data_size": 0 00:14:06.970 } 00:14:06.970 ] 00:14:06.970 }' 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.970 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.230 [2024-11-28 16:26:58.922199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.230 [2024-11-28 16:26:58.922250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.230 [2024-11-28 16:26:58.934227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.230 [2024-11-28 16:26:58.934265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.230 [2024-11-28 16:26:58.934273] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.230 [2024-11-28 16:26:58.934282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.230 [2024-11-28 16:26:58.934288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.230 [2024-11-28 16:26:58.934297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.230 [2024-11-28 16:26:58.934302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:07.230 [2024-11-28 16:26:58.934311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.230 [2024-11-28 16:26:58.961276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.230 BaseBdev1 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.230 
16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.230 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.230 [ 00:14:07.230 { 00:14:07.230 "name": "BaseBdev1", 00:14:07.230 "aliases": [ 00:14:07.230 "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3" 00:14:07.230 ], 00:14:07.230 "product_name": "Malloc disk", 00:14:07.230 "block_size": 512, 00:14:07.230 "num_blocks": 65536, 00:14:07.230 "uuid": "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:07.230 "assigned_rate_limits": { 00:14:07.230 "rw_ios_per_sec": 0, 00:14:07.230 "rw_mbytes_per_sec": 0, 00:14:07.230 "r_mbytes_per_sec": 0, 00:14:07.230 "w_mbytes_per_sec": 0 00:14:07.230 }, 00:14:07.230 "claimed": true, 00:14:07.230 "claim_type": "exclusive_write", 00:14:07.230 "zoned": false, 00:14:07.230 "supported_io_types": { 00:14:07.230 "read": true, 00:14:07.230 "write": true, 00:14:07.230 "unmap": true, 00:14:07.230 "flush": true, 00:14:07.230 "reset": true, 00:14:07.230 "nvme_admin": false, 00:14:07.230 "nvme_io": false, 00:14:07.230 "nvme_io_md": false, 00:14:07.230 "write_zeroes": true, 00:14:07.230 "zcopy": true, 00:14:07.230 "get_zone_info": false, 00:14:07.230 "zone_management": false, 00:14:07.230 "zone_append": false, 00:14:07.230 "compare": false, 00:14:07.231 "compare_and_write": false, 00:14:07.231 "abort": true, 00:14:07.231 "seek_hole": false, 00:14:07.231 "seek_data": false, 00:14:07.231 "copy": true, 00:14:07.231 "nvme_iov_md": false 00:14:07.231 }, 00:14:07.231 "memory_domains": [ 00:14:07.231 { 00:14:07.231 "dma_device_id": "system", 00:14:07.231 "dma_device_type": 1 00:14:07.231 }, 00:14:07.231 { 00:14:07.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.231 "dma_device_type": 2 00:14:07.231 } 00:14:07.231 ], 00:14:07.231 "driver_specific": {} 00:14:07.231 } 
00:14:07.231 ] 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.231 16:26:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.490 "name": "Existed_Raid", 00:14:07.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.490 "strip_size_kb": 64, 00:14:07.490 "state": "configuring", 00:14:07.490 "raid_level": "raid5f", 00:14:07.490 "superblock": false, 00:14:07.490 "num_base_bdevs": 4, 00:14:07.490 "num_base_bdevs_discovered": 1, 00:14:07.490 "num_base_bdevs_operational": 4, 00:14:07.490 "base_bdevs_list": [ 00:14:07.490 { 00:14:07.490 "name": "BaseBdev1", 00:14:07.490 "uuid": "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:07.490 "is_configured": true, 00:14:07.490 "data_offset": 0, 00:14:07.490 "data_size": 65536 00:14:07.490 }, 00:14:07.490 { 00:14:07.490 "name": "BaseBdev2", 00:14:07.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.490 "is_configured": false, 00:14:07.490 "data_offset": 0, 00:14:07.490 "data_size": 0 00:14:07.490 }, 00:14:07.490 { 00:14:07.490 "name": "BaseBdev3", 00:14:07.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.490 "is_configured": false, 00:14:07.490 "data_offset": 0, 00:14:07.490 "data_size": 0 00:14:07.490 }, 00:14:07.490 { 00:14:07.490 "name": "BaseBdev4", 00:14:07.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.490 "is_configured": false, 00:14:07.490 "data_offset": 0, 00:14:07.490 "data_size": 0 00:14:07.490 } 00:14:07.490 ] 00:14:07.490 }' 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.490 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.749 
[2024-11-28 16:26:59.472410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.749 [2024-11-28 16:26:59.472451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.749 [2024-11-28 16:26:59.484432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.749 [2024-11-28 16:26:59.486516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.749 [2024-11-28 16:26:59.486551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.749 [2024-11-28 16:26:59.486560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.749 [2024-11-28 16:26:59.486567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.749 [2024-11-28 16:26:59.486573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:07.749 [2024-11-28 16:26:59.486580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.749 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.750 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.009 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.009 "name": "Existed_Raid", 00:14:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:08.009 "strip_size_kb": 64, 00:14:08.009 "state": "configuring", 00:14:08.009 "raid_level": "raid5f", 00:14:08.009 "superblock": false, 00:14:08.009 "num_base_bdevs": 4, 00:14:08.009 "num_base_bdevs_discovered": 1, 00:14:08.009 "num_base_bdevs_operational": 4, 00:14:08.009 "base_bdevs_list": [ 00:14:08.009 { 00:14:08.009 "name": "BaseBdev1", 00:14:08.009 "uuid": "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:08.009 "is_configured": true, 00:14:08.009 "data_offset": 0, 00:14:08.009 "data_size": 65536 00:14:08.009 }, 00:14:08.009 { 00:14:08.009 "name": "BaseBdev2", 00:14:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.009 "is_configured": false, 00:14:08.009 "data_offset": 0, 00:14:08.009 "data_size": 0 00:14:08.009 }, 00:14:08.009 { 00:14:08.009 "name": "BaseBdev3", 00:14:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.009 "is_configured": false, 00:14:08.009 "data_offset": 0, 00:14:08.009 "data_size": 0 00:14:08.009 }, 00:14:08.009 { 00:14:08.009 "name": "BaseBdev4", 00:14:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.009 "is_configured": false, 00:14:08.009 "data_offset": 0, 00:14:08.009 "data_size": 0 00:14:08.009 } 00:14:08.009 ] 00:14:08.009 }' 00:14:08.009 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.009 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.269 [2024-11-28 16:26:59.971227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.269 BaseBdev2 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.269 16:26:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.269 [ 00:14:08.269 { 00:14:08.269 "name": "BaseBdev2", 00:14:08.269 "aliases": [ 00:14:08.269 "e196d1d5-41fd-487e-b3ed-d731edea495e" 00:14:08.269 ], 00:14:08.269 "product_name": "Malloc disk", 00:14:08.269 "block_size": 512, 00:14:08.269 "num_blocks": 65536, 00:14:08.269 "uuid": "e196d1d5-41fd-487e-b3ed-d731edea495e", 00:14:08.269 "assigned_rate_limits": { 00:14:08.269 "rw_ios_per_sec": 0, 00:14:08.269 "rw_mbytes_per_sec": 0, 00:14:08.269 
"r_mbytes_per_sec": 0, 00:14:08.269 "w_mbytes_per_sec": 0 00:14:08.269 }, 00:14:08.269 "claimed": true, 00:14:08.269 "claim_type": "exclusive_write", 00:14:08.269 "zoned": false, 00:14:08.269 "supported_io_types": { 00:14:08.269 "read": true, 00:14:08.269 "write": true, 00:14:08.269 "unmap": true, 00:14:08.269 "flush": true, 00:14:08.269 "reset": true, 00:14:08.269 "nvme_admin": false, 00:14:08.269 "nvme_io": false, 00:14:08.269 "nvme_io_md": false, 00:14:08.269 "write_zeroes": true, 00:14:08.269 "zcopy": true, 00:14:08.269 "get_zone_info": false, 00:14:08.269 "zone_management": false, 00:14:08.269 "zone_append": false, 00:14:08.269 "compare": false, 00:14:08.269 "compare_and_write": false, 00:14:08.269 "abort": true, 00:14:08.269 "seek_hole": false, 00:14:08.269 "seek_data": false, 00:14:08.269 "copy": true, 00:14:08.269 "nvme_iov_md": false 00:14:08.269 }, 00:14:08.269 "memory_domains": [ 00:14:08.269 { 00:14:08.269 "dma_device_id": "system", 00:14:08.269 "dma_device_type": 1 00:14:08.269 }, 00:14:08.269 { 00:14:08.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.269 "dma_device_type": 2 00:14:08.269 } 00:14:08.269 ], 00:14:08.269 "driver_specific": {} 00:14:08.269 } 00:14:08.269 ] 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.269 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.529 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.529 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.529 "name": "Existed_Raid", 00:14:08.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.529 "strip_size_kb": 64, 00:14:08.529 "state": "configuring", 00:14:08.529 "raid_level": "raid5f", 00:14:08.529 "superblock": false, 00:14:08.529 "num_base_bdevs": 4, 00:14:08.529 "num_base_bdevs_discovered": 2, 00:14:08.529 "num_base_bdevs_operational": 4, 00:14:08.529 "base_bdevs_list": [ 00:14:08.529 { 00:14:08.529 "name": "BaseBdev1", 00:14:08.529 "uuid": 
"a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:08.529 "is_configured": true, 00:14:08.529 "data_offset": 0, 00:14:08.529 "data_size": 65536 00:14:08.529 }, 00:14:08.529 { 00:14:08.529 "name": "BaseBdev2", 00:14:08.529 "uuid": "e196d1d5-41fd-487e-b3ed-d731edea495e", 00:14:08.529 "is_configured": true, 00:14:08.529 "data_offset": 0, 00:14:08.529 "data_size": 65536 00:14:08.529 }, 00:14:08.529 { 00:14:08.529 "name": "BaseBdev3", 00:14:08.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.529 "is_configured": false, 00:14:08.529 "data_offset": 0, 00:14:08.529 "data_size": 0 00:14:08.529 }, 00:14:08.529 { 00:14:08.529 "name": "BaseBdev4", 00:14:08.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.529 "is_configured": false, 00:14:08.529 "data_offset": 0, 00:14:08.529 "data_size": 0 00:14:08.529 } 00:14:08.529 ] 00:14:08.529 }' 00:14:08.529 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.529 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.789 [2024-11-28 16:27:00.470976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.789 BaseBdev3 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.789 [ 00:14:08.789 { 00:14:08.789 "name": "BaseBdev3", 00:14:08.789 "aliases": [ 00:14:08.789 "95e4ca78-f866-44ce-9169-33760bd8160f" 00:14:08.789 ], 00:14:08.789 "product_name": "Malloc disk", 00:14:08.789 "block_size": 512, 00:14:08.789 "num_blocks": 65536, 00:14:08.789 "uuid": "95e4ca78-f866-44ce-9169-33760bd8160f", 00:14:08.789 "assigned_rate_limits": { 00:14:08.789 "rw_ios_per_sec": 0, 00:14:08.789 "rw_mbytes_per_sec": 0, 00:14:08.789 "r_mbytes_per_sec": 0, 00:14:08.789 "w_mbytes_per_sec": 0 00:14:08.789 }, 00:14:08.789 "claimed": true, 00:14:08.789 "claim_type": "exclusive_write", 00:14:08.789 "zoned": false, 00:14:08.789 "supported_io_types": { 00:14:08.789 "read": true, 00:14:08.789 "write": true, 00:14:08.789 "unmap": true, 00:14:08.789 "flush": true, 00:14:08.789 "reset": true, 00:14:08.789 "nvme_admin": false, 
00:14:08.789 "nvme_io": false, 00:14:08.789 "nvme_io_md": false, 00:14:08.789 "write_zeroes": true, 00:14:08.789 "zcopy": true, 00:14:08.789 "get_zone_info": false, 00:14:08.789 "zone_management": false, 00:14:08.789 "zone_append": false, 00:14:08.789 "compare": false, 00:14:08.789 "compare_and_write": false, 00:14:08.789 "abort": true, 00:14:08.789 "seek_hole": false, 00:14:08.789 "seek_data": false, 00:14:08.789 "copy": true, 00:14:08.789 "nvme_iov_md": false 00:14:08.789 }, 00:14:08.789 "memory_domains": [ 00:14:08.789 { 00:14:08.789 "dma_device_id": "system", 00:14:08.789 "dma_device_type": 1 00:14:08.789 }, 00:14:08.789 { 00:14:08.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.789 "dma_device_type": 2 00:14:08.789 } 00:14:08.789 ], 00:14:08.789 "driver_specific": {} 00:14:08.789 } 00:14:08.789 ] 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.789 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.048 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.048 "name": "Existed_Raid", 00:14:09.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.049 "strip_size_kb": 64, 00:14:09.049 "state": "configuring", 00:14:09.049 "raid_level": "raid5f", 00:14:09.049 "superblock": false, 00:14:09.049 "num_base_bdevs": 4, 00:14:09.049 "num_base_bdevs_discovered": 3, 00:14:09.049 "num_base_bdevs_operational": 4, 00:14:09.049 "base_bdevs_list": [ 00:14:09.049 { 00:14:09.049 "name": "BaseBdev1", 00:14:09.049 "uuid": "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:09.049 "is_configured": true, 00:14:09.049 "data_offset": 0, 00:14:09.049 "data_size": 65536 00:14:09.049 }, 00:14:09.049 { 00:14:09.049 "name": "BaseBdev2", 00:14:09.049 "uuid": "e196d1d5-41fd-487e-b3ed-d731edea495e", 00:14:09.049 "is_configured": true, 00:14:09.049 "data_offset": 0, 00:14:09.049 "data_size": 65536 00:14:09.049 }, 00:14:09.049 { 
00:14:09.049 "name": "BaseBdev3", 00:14:09.049 "uuid": "95e4ca78-f866-44ce-9169-33760bd8160f", 00:14:09.049 "is_configured": true, 00:14:09.049 "data_offset": 0, 00:14:09.049 "data_size": 65536 00:14:09.049 }, 00:14:09.049 { 00:14:09.049 "name": "BaseBdev4", 00:14:09.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.049 "is_configured": false, 00:14:09.049 "data_offset": 0, 00:14:09.049 "data_size": 0 00:14:09.049 } 00:14:09.049 ] 00:14:09.049 }' 00:14:09.049 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.049 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.308 16:27:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:09.308 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.308 16:27:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.308 [2024-11-28 16:27:00.999018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:09.308 [2024-11-28 16:27:00.999087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:09.308 [2024-11-28 16:27:00.999102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:09.308 [2024-11-28 16:27:00.999409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:09.308 [2024-11-28 16:27:00.999924] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:09.308 [2024-11-28 16:27:00.999946] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:09.308 [2024-11-28 16:27:01.000173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.308 BaseBdev4 00:14:09.308 16:27:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.308 [ 00:14:09.308 { 00:14:09.308 "name": "BaseBdev4", 00:14:09.308 "aliases": [ 00:14:09.308 "e3cf84f1-cb41-4131-a9e1-1fff93acfaee" 00:14:09.308 ], 00:14:09.308 "product_name": "Malloc disk", 00:14:09.308 "block_size": 512, 00:14:09.308 "num_blocks": 65536, 00:14:09.308 "uuid": "e3cf84f1-cb41-4131-a9e1-1fff93acfaee", 00:14:09.308 "assigned_rate_limits": { 00:14:09.308 "rw_ios_per_sec": 0, 00:14:09.308 
"rw_mbytes_per_sec": 0, 00:14:09.308 "r_mbytes_per_sec": 0, 00:14:09.308 "w_mbytes_per_sec": 0 00:14:09.308 }, 00:14:09.308 "claimed": true, 00:14:09.308 "claim_type": "exclusive_write", 00:14:09.308 "zoned": false, 00:14:09.308 "supported_io_types": { 00:14:09.308 "read": true, 00:14:09.308 "write": true, 00:14:09.308 "unmap": true, 00:14:09.308 "flush": true, 00:14:09.308 "reset": true, 00:14:09.308 "nvme_admin": false, 00:14:09.308 "nvme_io": false, 00:14:09.308 "nvme_io_md": false, 00:14:09.308 "write_zeroes": true, 00:14:09.308 "zcopy": true, 00:14:09.308 "get_zone_info": false, 00:14:09.308 "zone_management": false, 00:14:09.308 "zone_append": false, 00:14:09.308 "compare": false, 00:14:09.308 "compare_and_write": false, 00:14:09.308 "abort": true, 00:14:09.308 "seek_hole": false, 00:14:09.308 "seek_data": false, 00:14:09.308 "copy": true, 00:14:09.308 "nvme_iov_md": false 00:14:09.308 }, 00:14:09.308 "memory_domains": [ 00:14:09.308 { 00:14:09.308 "dma_device_id": "system", 00:14:09.308 "dma_device_type": 1 00:14:09.308 }, 00:14:09.308 { 00:14:09.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.308 "dma_device_type": 2 00:14:09.308 } 00:14:09.308 ], 00:14:09.308 "driver_specific": {} 00:14:09.308 } 00:14:09.308 ] 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.308 16:27:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.308 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.309 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.309 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.309 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.309 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.309 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.567 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.567 "name": "Existed_Raid", 00:14:09.567 "uuid": "4366f679-003e-452c-aa92-1898037e59e4", 00:14:09.567 "strip_size_kb": 64, 00:14:09.567 "state": "online", 00:14:09.567 "raid_level": "raid5f", 00:14:09.567 "superblock": false, 00:14:09.567 "num_base_bdevs": 4, 00:14:09.567 "num_base_bdevs_discovered": 4, 00:14:09.567 "num_base_bdevs_operational": 4, 00:14:09.567 "base_bdevs_list": [ 00:14:09.567 { 00:14:09.567 "name": 
"BaseBdev1", 00:14:09.567 "uuid": "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:09.567 "is_configured": true, 00:14:09.567 "data_offset": 0, 00:14:09.567 "data_size": 65536 00:14:09.567 }, 00:14:09.567 { 00:14:09.567 "name": "BaseBdev2", 00:14:09.567 "uuid": "e196d1d5-41fd-487e-b3ed-d731edea495e", 00:14:09.567 "is_configured": true, 00:14:09.567 "data_offset": 0, 00:14:09.567 "data_size": 65536 00:14:09.567 }, 00:14:09.567 { 00:14:09.567 "name": "BaseBdev3", 00:14:09.567 "uuid": "95e4ca78-f866-44ce-9169-33760bd8160f", 00:14:09.567 "is_configured": true, 00:14:09.567 "data_offset": 0, 00:14:09.567 "data_size": 65536 00:14:09.567 }, 00:14:09.567 { 00:14:09.567 "name": "BaseBdev4", 00:14:09.567 "uuid": "e3cf84f1-cb41-4131-a9e1-1fff93acfaee", 00:14:09.567 "is_configured": true, 00:14:09.567 "data_offset": 0, 00:14:09.567 "data_size": 65536 00:14:09.567 } 00:14:09.567 ] 00:14:09.567 }' 00:14:09.567 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.567 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.826 [2024-11-28 16:27:01.414545] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.826 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:09.826 "name": "Existed_Raid", 00:14:09.826 "aliases": [ 00:14:09.826 "4366f679-003e-452c-aa92-1898037e59e4" 00:14:09.826 ], 00:14:09.826 "product_name": "Raid Volume", 00:14:09.826 "block_size": 512, 00:14:09.826 "num_blocks": 196608, 00:14:09.826 "uuid": "4366f679-003e-452c-aa92-1898037e59e4", 00:14:09.826 "assigned_rate_limits": { 00:14:09.826 "rw_ios_per_sec": 0, 00:14:09.826 "rw_mbytes_per_sec": 0, 00:14:09.826 "r_mbytes_per_sec": 0, 00:14:09.826 "w_mbytes_per_sec": 0 00:14:09.826 }, 00:14:09.826 "claimed": false, 00:14:09.826 "zoned": false, 00:14:09.826 "supported_io_types": { 00:14:09.826 "read": true, 00:14:09.826 "write": true, 00:14:09.826 "unmap": false, 00:14:09.826 "flush": false, 00:14:09.826 "reset": true, 00:14:09.826 "nvme_admin": false, 00:14:09.826 "nvme_io": false, 00:14:09.826 "nvme_io_md": false, 00:14:09.826 "write_zeroes": true, 00:14:09.826 "zcopy": false, 00:14:09.826 "get_zone_info": false, 00:14:09.826 "zone_management": false, 00:14:09.826 "zone_append": false, 00:14:09.826 "compare": false, 00:14:09.826 "compare_and_write": false, 00:14:09.826 "abort": false, 00:14:09.826 "seek_hole": false, 00:14:09.826 "seek_data": false, 00:14:09.826 "copy": false, 00:14:09.826 "nvme_iov_md": false 00:14:09.826 }, 00:14:09.826 "driver_specific": { 00:14:09.826 "raid": { 00:14:09.826 "uuid": "4366f679-003e-452c-aa92-1898037e59e4", 00:14:09.826 "strip_size_kb": 64, 
00:14:09.826 "state": "online", 00:14:09.826 "raid_level": "raid5f", 00:14:09.826 "superblock": false, 00:14:09.827 "num_base_bdevs": 4, 00:14:09.827 "num_base_bdevs_discovered": 4, 00:14:09.827 "num_base_bdevs_operational": 4, 00:14:09.827 "base_bdevs_list": [ 00:14:09.827 { 00:14:09.827 "name": "BaseBdev1", 00:14:09.827 "uuid": "a9b0ed4c-03da-4cd8-a568-f89a1aa0fca3", 00:14:09.827 "is_configured": true, 00:14:09.827 "data_offset": 0, 00:14:09.827 "data_size": 65536 00:14:09.827 }, 00:14:09.827 { 00:14:09.827 "name": "BaseBdev2", 00:14:09.827 "uuid": "e196d1d5-41fd-487e-b3ed-d731edea495e", 00:14:09.827 "is_configured": true, 00:14:09.827 "data_offset": 0, 00:14:09.827 "data_size": 65536 00:14:09.827 }, 00:14:09.827 { 00:14:09.827 "name": "BaseBdev3", 00:14:09.827 "uuid": "95e4ca78-f866-44ce-9169-33760bd8160f", 00:14:09.827 "is_configured": true, 00:14:09.827 "data_offset": 0, 00:14:09.827 "data_size": 65536 00:14:09.827 }, 00:14:09.827 { 00:14:09.827 "name": "BaseBdev4", 00:14:09.827 "uuid": "e3cf84f1-cb41-4131-a9e1-1fff93acfaee", 00:14:09.827 "is_configured": true, 00:14:09.827 "data_offset": 0, 00:14:09.827 "data_size": 65536 00:14:09.827 } 00:14:09.827 ] 00:14:09.827 } 00:14:09.827 } 00:14:09.827 }' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:09.827 BaseBdev2 00:14:09.827 BaseBdev3 00:14:09.827 BaseBdev4' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.827 16:27:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.827 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:10.087 [2024-11-28 16:27:01.741814] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.087 16:27:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.087 16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.087 "name": "Existed_Raid", 00:14:10.087 "uuid": "4366f679-003e-452c-aa92-1898037e59e4", 00:14:10.087 "strip_size_kb": 64, 00:14:10.087 "state": "online", 00:14:10.087 "raid_level": "raid5f", 00:14:10.087 "superblock": false, 00:14:10.087 "num_base_bdevs": 4, 00:14:10.087 "num_base_bdevs_discovered": 3, 00:14:10.087 "num_base_bdevs_operational": 3, 00:14:10.087 "base_bdevs_list": [ 00:14:10.087 { 00:14:10.087 "name": null, 00:14:10.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.087 "is_configured": false, 00:14:10.087 "data_offset": 0, 00:14:10.087 "data_size": 65536 00:14:10.087 }, 00:14:10.087 { 00:14:10.088 "name": "BaseBdev2", 00:14:10.088 "uuid": "e196d1d5-41fd-487e-b3ed-d731edea495e", 00:14:10.088 "is_configured": true, 00:14:10.088 "data_offset": 0, 00:14:10.088 "data_size": 65536 00:14:10.088 }, 00:14:10.088 { 00:14:10.088 "name": "BaseBdev3", 00:14:10.088 "uuid": "95e4ca78-f866-44ce-9169-33760bd8160f", 00:14:10.088 "is_configured": true, 00:14:10.088 "data_offset": 0, 00:14:10.088 "data_size": 65536 00:14:10.088 }, 00:14:10.088 { 00:14:10.088 "name": "BaseBdev4", 00:14:10.088 "uuid": "e3cf84f1-cb41-4131-a9e1-1fff93acfaee", 00:14:10.088 "is_configured": true, 00:14:10.088 "data_offset": 0, 00:14:10.088 "data_size": 65536 00:14:10.088 } 00:14:10.088 ] 00:14:10.088 }' 00:14:10.088 
16:27:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.088 16:27:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.656 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.657 [2024-11-28 16:27:02.277564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.657 [2024-11-28 16:27:02.277673] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.657 [2024-11-28 16:27:02.298058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.657 [2024-11-28 16:27:02.341969] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.657 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.657 [2024-11-28 16:27:02.414529] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:10.657 [2024-11-28 16:27:02.414573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 16:27:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 BaseBdev2 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 [ 00:14:10.918 { 00:14:10.918 "name": "BaseBdev2", 00:14:10.918 "aliases": [ 00:14:10.918 "243a93dc-3ca0-4577-84c9-3f6cb79b7a89" 00:14:10.918 ], 00:14:10.918 "product_name": "Malloc disk", 00:14:10.918 "block_size": 512, 00:14:10.918 "num_blocks": 65536, 00:14:10.918 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:10.918 "assigned_rate_limits": { 00:14:10.918 "rw_ios_per_sec": 0, 00:14:10.918 "rw_mbytes_per_sec": 0, 00:14:10.918 "r_mbytes_per_sec": 0, 00:14:10.918 "w_mbytes_per_sec": 0 00:14:10.918 }, 00:14:10.918 "claimed": false, 00:14:10.918 "zoned": false, 00:14:10.918 "supported_io_types": { 00:14:10.918 "read": true, 00:14:10.918 "write": true, 00:14:10.918 "unmap": true, 00:14:10.918 "flush": true, 00:14:10.918 "reset": true, 00:14:10.918 "nvme_admin": false, 00:14:10.918 "nvme_io": false, 00:14:10.918 "nvme_io_md": false, 00:14:10.918 "write_zeroes": true, 00:14:10.918 "zcopy": true, 00:14:10.918 "get_zone_info": false, 00:14:10.918 "zone_management": false, 00:14:10.918 "zone_append": false, 00:14:10.918 "compare": false, 00:14:10.918 "compare_and_write": false, 00:14:10.918 "abort": true, 00:14:10.918 "seek_hole": false, 00:14:10.918 "seek_data": false, 00:14:10.918 "copy": true, 00:14:10.918 "nvme_iov_md": false 00:14:10.918 }, 00:14:10.918 "memory_domains": [ 00:14:10.918 { 00:14:10.918 "dma_device_id": "system", 00:14:10.918 "dma_device_type": 1 00:14:10.918 }, 
00:14:10.918 { 00:14:10.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.918 "dma_device_type": 2 00:14:10.918 } 00:14:10.918 ], 00:14:10.918 "driver_specific": {} 00:14:10.918 } 00:14:10.918 ] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 BaseBdev3 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 [ 00:14:10.918 { 00:14:10.918 "name": "BaseBdev3", 00:14:10.918 "aliases": [ 00:14:10.918 "0720f56e-5dd5-4f54-a67c-37f8340e5e30" 00:14:10.918 ], 00:14:10.918 "product_name": "Malloc disk", 00:14:10.918 "block_size": 512, 00:14:10.918 "num_blocks": 65536, 00:14:10.918 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:10.918 "assigned_rate_limits": { 00:14:10.918 "rw_ios_per_sec": 0, 00:14:10.918 "rw_mbytes_per_sec": 0, 00:14:10.918 "r_mbytes_per_sec": 0, 00:14:10.918 "w_mbytes_per_sec": 0 00:14:10.918 }, 00:14:10.918 "claimed": false, 00:14:10.918 "zoned": false, 00:14:10.918 "supported_io_types": { 00:14:10.918 "read": true, 00:14:10.918 "write": true, 00:14:10.918 "unmap": true, 00:14:10.918 "flush": true, 00:14:10.918 "reset": true, 00:14:10.918 "nvme_admin": false, 00:14:10.918 "nvme_io": false, 00:14:10.918 "nvme_io_md": false, 00:14:10.918 "write_zeroes": true, 00:14:10.918 "zcopy": true, 00:14:10.918 "get_zone_info": false, 00:14:10.918 "zone_management": false, 00:14:10.918 "zone_append": false, 00:14:10.918 "compare": false, 00:14:10.918 "compare_and_write": false, 00:14:10.918 "abort": true, 00:14:10.918 "seek_hole": false, 00:14:10.918 "seek_data": false, 00:14:10.918 "copy": true, 00:14:10.918 "nvme_iov_md": false 00:14:10.918 }, 00:14:10.918 "memory_domains": [ 00:14:10.918 { 00:14:10.918 "dma_device_id": "system", 00:14:10.918 
"dma_device_type": 1 00:14:10.918 }, 00:14:10.918 { 00:14:10.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.918 "dma_device_type": 2 00:14:10.918 } 00:14:10.918 ], 00:14:10.918 "driver_specific": {} 00:14:10.918 } 00:14:10.918 ] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:10.918 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.919 BaseBdev4 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.919 16:27:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.919 [ 00:14:10.919 { 00:14:10.919 "name": "BaseBdev4", 00:14:10.919 "aliases": [ 00:14:10.919 "bf4598df-b351-4761-b52f-a2a7a15ae7c8" 00:14:10.919 ], 00:14:10.919 "product_name": "Malloc disk", 00:14:10.919 "block_size": 512, 00:14:10.919 "num_blocks": 65536, 00:14:10.919 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:10.919 "assigned_rate_limits": { 00:14:10.919 "rw_ios_per_sec": 0, 00:14:10.919 "rw_mbytes_per_sec": 0, 00:14:10.919 "r_mbytes_per_sec": 0, 00:14:10.919 "w_mbytes_per_sec": 0 00:14:10.919 }, 00:14:10.919 "claimed": false, 00:14:10.919 "zoned": false, 00:14:10.919 "supported_io_types": { 00:14:10.919 "read": true, 00:14:10.919 "write": true, 00:14:10.919 "unmap": true, 00:14:10.919 "flush": true, 00:14:10.919 "reset": true, 00:14:10.919 "nvme_admin": false, 00:14:10.919 "nvme_io": false, 00:14:10.919 "nvme_io_md": false, 00:14:10.919 "write_zeroes": true, 00:14:10.919 "zcopy": true, 00:14:10.919 "get_zone_info": false, 00:14:10.919 "zone_management": false, 00:14:10.919 "zone_append": false, 00:14:10.919 "compare": false, 00:14:10.919 "compare_and_write": false, 00:14:10.919 "abort": true, 00:14:10.919 "seek_hole": false, 00:14:10.919 "seek_data": false, 00:14:10.919 "copy": true, 00:14:10.919 "nvme_iov_md": false 00:14:10.919 }, 00:14:10.919 "memory_domains": [ 00:14:10.919 { 00:14:10.919 
"dma_device_id": "system", 00:14:10.919 "dma_device_type": 1 00:14:10.919 }, 00:14:10.919 { 00:14:10.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.919 "dma_device_type": 2 00:14:10.919 } 00:14:10.919 ], 00:14:10.919 "driver_specific": {} 00:14:10.919 } 00:14:10.919 ] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.919 [2024-11-28 16:27:02.647042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.919 [2024-11-28 16:27:02.647087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.919 [2024-11-28 16:27:02.647109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.919 [2024-11-28 16:27:02.649228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.919 [2024-11-28 16:27:02.649278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.919 "name": "Existed_Raid", 00:14:10.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.919 "strip_size_kb": 64, 00:14:10.919 "state": "configuring", 00:14:10.919 "raid_level": "raid5f", 00:14:10.919 "superblock": false, 00:14:10.919 
"num_base_bdevs": 4, 00:14:10.919 "num_base_bdevs_discovered": 3, 00:14:10.919 "num_base_bdevs_operational": 4, 00:14:10.919 "base_bdevs_list": [ 00:14:10.919 { 00:14:10.919 "name": "BaseBdev1", 00:14:10.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.919 "is_configured": false, 00:14:10.919 "data_offset": 0, 00:14:10.919 "data_size": 0 00:14:10.919 }, 00:14:10.919 { 00:14:10.919 "name": "BaseBdev2", 00:14:10.919 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:10.919 "is_configured": true, 00:14:10.919 "data_offset": 0, 00:14:10.919 "data_size": 65536 00:14:10.919 }, 00:14:10.919 { 00:14:10.919 "name": "BaseBdev3", 00:14:10.919 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:10.919 "is_configured": true, 00:14:10.919 "data_offset": 0, 00:14:10.919 "data_size": 65536 00:14:10.919 }, 00:14:10.919 { 00:14:10.919 "name": "BaseBdev4", 00:14:10.919 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:10.919 "is_configured": true, 00:14:10.919 "data_offset": 0, 00:14:10.919 "data_size": 65536 00:14:10.919 } 00:14:10.919 ] 00:14:10.919 }' 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.919 16:27:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.487 [2024-11-28 16:27:03.050298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.487 "name": "Existed_Raid", 00:14:11.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.487 "strip_size_kb": 64, 00:14:11.487 "state": "configuring", 00:14:11.487 "raid_level": "raid5f", 00:14:11.487 "superblock": false, 00:14:11.487 "num_base_bdevs": 4, 
00:14:11.487 "num_base_bdevs_discovered": 2, 00:14:11.487 "num_base_bdevs_operational": 4, 00:14:11.487 "base_bdevs_list": [ 00:14:11.487 { 00:14:11.487 "name": "BaseBdev1", 00:14:11.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.487 "is_configured": false, 00:14:11.487 "data_offset": 0, 00:14:11.487 "data_size": 0 00:14:11.487 }, 00:14:11.487 { 00:14:11.487 "name": null, 00:14:11.487 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:11.487 "is_configured": false, 00:14:11.487 "data_offset": 0, 00:14:11.487 "data_size": 65536 00:14:11.487 }, 00:14:11.487 { 00:14:11.487 "name": "BaseBdev3", 00:14:11.487 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:11.487 "is_configured": true, 00:14:11.487 "data_offset": 0, 00:14:11.487 "data_size": 65536 00:14:11.487 }, 00:14:11.487 { 00:14:11.487 "name": "BaseBdev4", 00:14:11.487 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:11.487 "is_configured": true, 00:14:11.487 "data_offset": 0, 00:14:11.487 "data_size": 65536 00:14:11.487 } 00:14:11.487 ] 00:14:11.487 }' 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.487 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.747 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:12.006 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.006 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:12.007 16:27:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 [2024-11-28 16:27:03.582015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.007 BaseBdev1 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 16:27:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 [ 00:14:12.007 { 00:14:12.007 "name": "BaseBdev1", 00:14:12.007 "aliases": [ 00:14:12.007 "56b87cfc-46dc-4431-9960-99be19711f7a" 00:14:12.007 ], 00:14:12.007 "product_name": "Malloc disk", 00:14:12.007 "block_size": 512, 00:14:12.007 "num_blocks": 65536, 00:14:12.007 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:12.007 "assigned_rate_limits": { 00:14:12.007 "rw_ios_per_sec": 0, 00:14:12.007 "rw_mbytes_per_sec": 0, 00:14:12.007 "r_mbytes_per_sec": 0, 00:14:12.007 "w_mbytes_per_sec": 0 00:14:12.007 }, 00:14:12.007 "claimed": true, 00:14:12.007 "claim_type": "exclusive_write", 00:14:12.007 "zoned": false, 00:14:12.007 "supported_io_types": { 00:14:12.007 "read": true, 00:14:12.007 "write": true, 00:14:12.007 "unmap": true, 00:14:12.007 "flush": true, 00:14:12.007 "reset": true, 00:14:12.007 "nvme_admin": false, 00:14:12.007 "nvme_io": false, 00:14:12.007 "nvme_io_md": false, 00:14:12.007 "write_zeroes": true, 00:14:12.007 "zcopy": true, 00:14:12.007 "get_zone_info": false, 00:14:12.007 "zone_management": false, 00:14:12.007 "zone_append": false, 00:14:12.007 "compare": false, 00:14:12.007 "compare_and_write": false, 00:14:12.007 "abort": true, 00:14:12.007 "seek_hole": false, 00:14:12.007 "seek_data": false, 00:14:12.007 "copy": true, 00:14:12.007 "nvme_iov_md": false 00:14:12.007 }, 00:14:12.007 "memory_domains": [ 00:14:12.007 { 00:14:12.007 "dma_device_id": "system", 00:14:12.007 "dma_device_type": 1 00:14:12.007 }, 00:14:12.007 { 00:14:12.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.007 "dma_device_type": 2 00:14:12.007 } 00:14:12.007 ], 00:14:12.007 "driver_specific": {} 00:14:12.007 } 00:14:12.007 ] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:12.007 16:27:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.007 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.007 "name": "Existed_Raid", 00:14:12.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.007 "strip_size_kb": 64, 00:14:12.007 "state": 
"configuring", 00:14:12.007 "raid_level": "raid5f", 00:14:12.007 "superblock": false, 00:14:12.007 "num_base_bdevs": 4, 00:14:12.007 "num_base_bdevs_discovered": 3, 00:14:12.007 "num_base_bdevs_operational": 4, 00:14:12.007 "base_bdevs_list": [ 00:14:12.007 { 00:14:12.007 "name": "BaseBdev1", 00:14:12.007 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:12.007 "is_configured": true, 00:14:12.007 "data_offset": 0, 00:14:12.007 "data_size": 65536 00:14:12.007 }, 00:14:12.007 { 00:14:12.007 "name": null, 00:14:12.007 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:12.007 "is_configured": false, 00:14:12.007 "data_offset": 0, 00:14:12.007 "data_size": 65536 00:14:12.007 }, 00:14:12.007 { 00:14:12.007 "name": "BaseBdev3", 00:14:12.007 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:12.007 "is_configured": true, 00:14:12.007 "data_offset": 0, 00:14:12.007 "data_size": 65536 00:14:12.007 }, 00:14:12.007 { 00:14:12.007 "name": "BaseBdev4", 00:14:12.007 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:12.007 "is_configured": true, 00:14:12.007 "data_offset": 0, 00:14:12.007 "data_size": 65536 00:14:12.007 } 00:14:12.007 ] 00:14:12.007 }' 00:14:12.008 16:27:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.008 16:27:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.574 16:27:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.574 [2024-11-28 16:27:04.129074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.574 16:27:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.574 "name": "Existed_Raid", 00:14:12.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.574 "strip_size_kb": 64, 00:14:12.574 "state": "configuring", 00:14:12.574 "raid_level": "raid5f", 00:14:12.574 "superblock": false, 00:14:12.574 "num_base_bdevs": 4, 00:14:12.574 "num_base_bdevs_discovered": 2, 00:14:12.574 "num_base_bdevs_operational": 4, 00:14:12.574 "base_bdevs_list": [ 00:14:12.574 { 00:14:12.574 "name": "BaseBdev1", 00:14:12.574 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:12.574 "is_configured": true, 00:14:12.574 "data_offset": 0, 00:14:12.574 "data_size": 65536 00:14:12.574 }, 00:14:12.574 { 00:14:12.574 "name": null, 00:14:12.574 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:12.574 "is_configured": false, 00:14:12.574 "data_offset": 0, 00:14:12.574 "data_size": 65536 00:14:12.574 }, 00:14:12.574 { 00:14:12.574 "name": null, 00:14:12.574 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:12.574 "is_configured": false, 00:14:12.574 "data_offset": 0, 00:14:12.574 "data_size": 65536 00:14:12.574 }, 00:14:12.574 { 00:14:12.574 "name": "BaseBdev4", 00:14:12.574 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:12.574 "is_configured": true, 00:14:12.574 "data_offset": 0, 00:14:12.574 "data_size": 65536 00:14:12.574 } 00:14:12.574 ] 00:14:12.574 }' 00:14:12.574 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.574 16:27:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.142 [2024-11-28 16:27:04.680222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.142 
16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.142 "name": "Existed_Raid", 00:14:13.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.142 "strip_size_kb": 64, 00:14:13.142 "state": "configuring", 00:14:13.142 "raid_level": "raid5f", 00:14:13.142 "superblock": false, 00:14:13.142 "num_base_bdevs": 4, 00:14:13.142 "num_base_bdevs_discovered": 3, 00:14:13.142 "num_base_bdevs_operational": 4, 00:14:13.142 "base_bdevs_list": [ 00:14:13.142 { 00:14:13.142 "name": "BaseBdev1", 00:14:13.142 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:13.142 "is_configured": true, 00:14:13.142 "data_offset": 0, 00:14:13.142 "data_size": 65536 00:14:13.142 }, 00:14:13.142 { 00:14:13.142 "name": null, 00:14:13.142 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:13.142 "is_configured": 
false, 00:14:13.142 "data_offset": 0, 00:14:13.142 "data_size": 65536 00:14:13.142 }, 00:14:13.142 { 00:14:13.142 "name": "BaseBdev3", 00:14:13.142 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:13.142 "is_configured": true, 00:14:13.142 "data_offset": 0, 00:14:13.142 "data_size": 65536 00:14:13.142 }, 00:14:13.142 { 00:14:13.142 "name": "BaseBdev4", 00:14:13.142 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:13.142 "is_configured": true, 00:14:13.142 "data_offset": 0, 00:14:13.142 "data_size": 65536 00:14:13.142 } 00:14:13.142 ] 00:14:13.142 }' 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.142 16:27:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.401 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.401 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:13.401 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.401 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.401 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.660 [2024-11-28 16:27:05.183412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.660 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.660 "name": "Existed_Raid", 00:14:13.660 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:13.660 "strip_size_kb": 64, 00:14:13.660 "state": "configuring", 00:14:13.660 "raid_level": "raid5f", 00:14:13.660 "superblock": false, 00:14:13.660 "num_base_bdevs": 4, 00:14:13.660 "num_base_bdevs_discovered": 2, 00:14:13.660 "num_base_bdevs_operational": 4, 00:14:13.660 "base_bdevs_list": [ 00:14:13.660 { 00:14:13.660 "name": null, 00:14:13.660 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:13.660 "is_configured": false, 00:14:13.660 "data_offset": 0, 00:14:13.660 "data_size": 65536 00:14:13.660 }, 00:14:13.660 { 00:14:13.660 "name": null, 00:14:13.660 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:13.660 "is_configured": false, 00:14:13.660 "data_offset": 0, 00:14:13.660 "data_size": 65536 00:14:13.660 }, 00:14:13.660 { 00:14:13.660 "name": "BaseBdev3", 00:14:13.660 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:13.660 "is_configured": true, 00:14:13.660 "data_offset": 0, 00:14:13.660 "data_size": 65536 00:14:13.660 }, 00:14:13.660 { 00:14:13.660 "name": "BaseBdev4", 00:14:13.660 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:13.660 "is_configured": true, 00:14:13.660 "data_offset": 0, 00:14:13.660 "data_size": 65536 00:14:13.660 } 00:14:13.660 ] 00:14:13.660 }' 00:14:13.661 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.661 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.920 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.920 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.920 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.920 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.179 [2024-11-28 16:27:05.734447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.179 16:27:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.179 "name": "Existed_Raid", 00:14:14.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.179 "strip_size_kb": 64, 00:14:14.179 "state": "configuring", 00:14:14.179 "raid_level": "raid5f", 00:14:14.179 "superblock": false, 00:14:14.179 "num_base_bdevs": 4, 00:14:14.179 "num_base_bdevs_discovered": 3, 00:14:14.179 "num_base_bdevs_operational": 4, 00:14:14.179 "base_bdevs_list": [ 00:14:14.179 { 00:14:14.179 "name": null, 00:14:14.179 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:14.179 "is_configured": false, 00:14:14.179 "data_offset": 0, 00:14:14.179 "data_size": 65536 00:14:14.179 }, 00:14:14.179 { 00:14:14.180 "name": "BaseBdev2", 00:14:14.180 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:14.180 "is_configured": true, 00:14:14.180 "data_offset": 0, 00:14:14.180 "data_size": 65536 00:14:14.180 }, 00:14:14.180 { 00:14:14.180 "name": "BaseBdev3", 00:14:14.180 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:14.180 "is_configured": true, 00:14:14.180 "data_offset": 0, 00:14:14.180 "data_size": 65536 00:14:14.180 }, 00:14:14.180 { 00:14:14.180 "name": "BaseBdev4", 00:14:14.180 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:14.180 "is_configured": true, 00:14:14.180 "data_offset": 0, 00:14:14.180 "data_size": 65536 00:14:14.180 } 00:14:14.180 ] 00:14:14.180 }' 00:14:14.180 16:27:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.180 16:27:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56b87cfc-46dc-4431-9960-99be19711f7a 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.749 [2024-11-28 16:27:06.345388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:14.749 [2024-11-28 
16:27:06.345441] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:14.749 [2024-11-28 16:27:06.345449] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:14.749 [2024-11-28 16:27:06.345700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:14.749 [2024-11-28 16:27:06.346164] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:14.749 [2024-11-28 16:27:06.346185] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:14.749 [2024-11-28 16:27:06.346358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.749 NewBaseBdev 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:14.749 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.750 [ 00:14:14.750 { 00:14:14.750 "name": "NewBaseBdev", 00:14:14.750 "aliases": [ 00:14:14.750 "56b87cfc-46dc-4431-9960-99be19711f7a" 00:14:14.750 ], 00:14:14.750 "product_name": "Malloc disk", 00:14:14.750 "block_size": 512, 00:14:14.750 "num_blocks": 65536, 00:14:14.750 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:14.750 "assigned_rate_limits": { 00:14:14.750 "rw_ios_per_sec": 0, 00:14:14.750 "rw_mbytes_per_sec": 0, 00:14:14.750 "r_mbytes_per_sec": 0, 00:14:14.750 "w_mbytes_per_sec": 0 00:14:14.750 }, 00:14:14.750 "claimed": true, 00:14:14.750 "claim_type": "exclusive_write", 00:14:14.750 "zoned": false, 00:14:14.750 "supported_io_types": { 00:14:14.750 "read": true, 00:14:14.750 "write": true, 00:14:14.750 "unmap": true, 00:14:14.750 "flush": true, 00:14:14.750 "reset": true, 00:14:14.750 "nvme_admin": false, 00:14:14.750 "nvme_io": false, 00:14:14.750 "nvme_io_md": false, 00:14:14.750 "write_zeroes": true, 00:14:14.750 "zcopy": true, 00:14:14.750 "get_zone_info": false, 00:14:14.750 "zone_management": false, 00:14:14.750 "zone_append": false, 00:14:14.750 "compare": false, 00:14:14.750 "compare_and_write": false, 00:14:14.750 "abort": true, 00:14:14.750 "seek_hole": false, 00:14:14.750 "seek_data": false, 00:14:14.750 "copy": true, 00:14:14.750 "nvme_iov_md": false 00:14:14.750 }, 00:14:14.750 "memory_domains": [ 00:14:14.750 { 00:14:14.750 "dma_device_id": "system", 00:14:14.750 "dma_device_type": 1 00:14:14.750 }, 00:14:14.750 { 00:14:14.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.750 "dma_device_type": 2 00:14:14.750 } 
00:14:14.750 ], 00:14:14.750 "driver_specific": {} 00:14:14.750 } 00:14:14.750 ] 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.750 "name": "Existed_Raid", 00:14:14.750 "uuid": "2127ddf0-aea1-4dcf-8601-45ba20907e92", 00:14:14.750 "strip_size_kb": 64, 00:14:14.750 "state": "online", 00:14:14.750 "raid_level": "raid5f", 00:14:14.750 "superblock": false, 00:14:14.750 "num_base_bdevs": 4, 00:14:14.750 "num_base_bdevs_discovered": 4, 00:14:14.750 "num_base_bdevs_operational": 4, 00:14:14.750 "base_bdevs_list": [ 00:14:14.750 { 00:14:14.750 "name": "NewBaseBdev", 00:14:14.750 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:14.750 "is_configured": true, 00:14:14.750 "data_offset": 0, 00:14:14.750 "data_size": 65536 00:14:14.750 }, 00:14:14.750 { 00:14:14.750 "name": "BaseBdev2", 00:14:14.750 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:14.750 "is_configured": true, 00:14:14.750 "data_offset": 0, 00:14:14.750 "data_size": 65536 00:14:14.750 }, 00:14:14.750 { 00:14:14.750 "name": "BaseBdev3", 00:14:14.750 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:14.750 "is_configured": true, 00:14:14.750 "data_offset": 0, 00:14:14.750 "data_size": 65536 00:14:14.750 }, 00:14:14.750 { 00:14:14.750 "name": "BaseBdev4", 00:14:14.750 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:14.750 "is_configured": true, 00:14:14.750 "data_offset": 0, 00:14:14.750 "data_size": 65536 00:14:14.750 } 00:14:14.750 ] 00:14:14.750 }' 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.750 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 [2024-11-28 16:27:06.800792] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:15.320 "name": "Existed_Raid", 00:14:15.320 "aliases": [ 00:14:15.320 "2127ddf0-aea1-4dcf-8601-45ba20907e92" 00:14:15.320 ], 00:14:15.320 "product_name": "Raid Volume", 00:14:15.320 "block_size": 512, 00:14:15.320 "num_blocks": 196608, 00:14:15.320 "uuid": "2127ddf0-aea1-4dcf-8601-45ba20907e92", 00:14:15.320 "assigned_rate_limits": { 00:14:15.320 "rw_ios_per_sec": 0, 00:14:15.320 "rw_mbytes_per_sec": 0, 00:14:15.320 "r_mbytes_per_sec": 0, 00:14:15.320 "w_mbytes_per_sec": 0 00:14:15.320 }, 00:14:15.320 "claimed": false, 00:14:15.320 "zoned": false, 00:14:15.320 "supported_io_types": { 00:14:15.320 "read": true, 00:14:15.320 "write": true, 00:14:15.320 "unmap": false, 00:14:15.320 "flush": false, 00:14:15.320 "reset": true, 00:14:15.320 "nvme_admin": false, 00:14:15.320 "nvme_io": false, 00:14:15.320 "nvme_io_md": 
false, 00:14:15.320 "write_zeroes": true, 00:14:15.320 "zcopy": false, 00:14:15.320 "get_zone_info": false, 00:14:15.320 "zone_management": false, 00:14:15.320 "zone_append": false, 00:14:15.320 "compare": false, 00:14:15.320 "compare_and_write": false, 00:14:15.320 "abort": false, 00:14:15.320 "seek_hole": false, 00:14:15.320 "seek_data": false, 00:14:15.320 "copy": false, 00:14:15.320 "nvme_iov_md": false 00:14:15.320 }, 00:14:15.320 "driver_specific": { 00:14:15.320 "raid": { 00:14:15.320 "uuid": "2127ddf0-aea1-4dcf-8601-45ba20907e92", 00:14:15.320 "strip_size_kb": 64, 00:14:15.320 "state": "online", 00:14:15.320 "raid_level": "raid5f", 00:14:15.320 "superblock": false, 00:14:15.320 "num_base_bdevs": 4, 00:14:15.320 "num_base_bdevs_discovered": 4, 00:14:15.320 "num_base_bdevs_operational": 4, 00:14:15.320 "base_bdevs_list": [ 00:14:15.320 { 00:14:15.320 "name": "NewBaseBdev", 00:14:15.320 "uuid": "56b87cfc-46dc-4431-9960-99be19711f7a", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 }, 00:14:15.320 { 00:14:15.320 "name": "BaseBdev2", 00:14:15.320 "uuid": "243a93dc-3ca0-4577-84c9-3f6cb79b7a89", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 }, 00:14:15.320 { 00:14:15.320 "name": "BaseBdev3", 00:14:15.320 "uuid": "0720f56e-5dd5-4f54-a67c-37f8340e5e30", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 }, 00:14:15.320 { 00:14:15.320 "name": "BaseBdev4", 00:14:15.320 "uuid": "bf4598df-b351-4761-b52f-a2a7a15ae7c8", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 } 00:14:15.320 ] 00:14:15.320 } 00:14:15.320 } 00:14:15.320 }' 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.320 16:27:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:15.320 BaseBdev2 00:14:15.320 BaseBdev3 00:14:15.320 BaseBdev4' 00:14:15.320 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.321 16:27:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 16:27:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.321 [2024-11-28 16:27:07.052165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.321 [2024-11-28 16:27:07.052192] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.321 [2024-11-28 16:27:07.052259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.321 [2024-11-28 16:27:07.052546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.321 [2024-11-28 16:27:07.052570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93250 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93250 ']' 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93250 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:14:15.321 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93250 00:14:15.581 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.581 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.581 killing process with pid 93250 00:14:15.581 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93250' 00:14:15.581 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93250 00:14:15.581 [2024-11-28 16:27:07.099888] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.581 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93250 00:14:15.581 [2024-11-28 16:27:07.178935] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.842 16:27:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:15.842 ************************************ 00:14:15.842 END TEST raid5f_state_function_test 00:14:15.842 ************************************ 00:14:15.842 00:14:15.842 real 0m9.952s 00:14:15.842 user 0m16.597s 00:14:15.842 sys 0m2.341s 00:14:15.842 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.842 16:27:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 16:27:07 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:16.102 16:27:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:16.102 16:27:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.102 16:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 ************************************ 00:14:16.102 START TEST 
raid5f_state_function_test_sb 00:14:16.102 ************************************ 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:16.102 
16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93909 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:16.102 Process raid pid: 93909 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93909' 00:14:16.102 16:27:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93909 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93909 ']' 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.102 16:27:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 [2024-11-28 16:27:07.738424] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:16.102 [2024-11-28 16:27:07.738541] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.362 [2024-11-28 16:27:07.902999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.362 [2024-11-28 16:27:07.975624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.362 [2024-11-28 16:27:08.051451] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.362 [2024-11-28 16:27:08.051491] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.930 [2024-11-28 16:27:08.559094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.930 [2024-11-28 16:27:08.559145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.930 [2024-11-28 16:27:08.559158] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.930 [2024-11-28 16:27:08.559167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.930 [2024-11-28 16:27:08.559173] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:16.930 [2024-11-28 16:27:08.559185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.930 [2024-11-28 16:27:08.559191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:16.930 [2024-11-28 16:27:08.559199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.930 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.930 "name": "Existed_Raid", 00:14:16.930 "uuid": "5feeb40c-370b-486d-80d9-871e6f056f95", 00:14:16.930 "strip_size_kb": 64, 00:14:16.930 "state": "configuring", 00:14:16.930 "raid_level": "raid5f", 00:14:16.930 "superblock": true, 00:14:16.930 "num_base_bdevs": 4, 00:14:16.930 "num_base_bdevs_discovered": 0, 00:14:16.930 "num_base_bdevs_operational": 4, 00:14:16.930 "base_bdevs_list": [ 00:14:16.930 { 00:14:16.930 "name": "BaseBdev1", 00:14:16.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.930 "is_configured": false, 00:14:16.930 "data_offset": 0, 00:14:16.930 "data_size": 0 00:14:16.930 }, 00:14:16.930 { 00:14:16.930 "name": "BaseBdev2", 00:14:16.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.930 "is_configured": false, 00:14:16.930 "data_offset": 0, 00:14:16.931 "data_size": 0 00:14:16.931 }, 00:14:16.931 { 00:14:16.931 "name": "BaseBdev3", 00:14:16.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.931 "is_configured": false, 00:14:16.931 "data_offset": 0, 00:14:16.931 "data_size": 0 00:14:16.931 }, 00:14:16.931 { 00:14:16.931 "name": "BaseBdev4", 00:14:16.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.931 "is_configured": false, 00:14:16.931 "data_offset": 0, 00:14:16.931 "data_size": 0 00:14:16.931 } 00:14:16.931 ] 00:14:16.931 }' 00:14:16.931 16:27:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.931 16:27:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.501 [2024-11-28 16:27:09.010202] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.501 [2024-11-28 16:27:09.010244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.501 [2024-11-28 16:27:09.022227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.501 [2024-11-28 16:27:09.022263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.501 [2024-11-28 16:27:09.022271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.501 [2024-11-28 16:27:09.022280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.501 [2024-11-28 16:27:09.022286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:17.501 [2024-11-28 16:27:09.022295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.501 [2024-11-28 16:27:09.022300] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:17.501 [2024-11-28 16:27:09.022309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.501 [2024-11-28 16:27:09.049153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.501 BaseBdev1 00:14:17.501 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.502 [ 00:14:17.502 { 00:14:17.502 "name": "BaseBdev1", 00:14:17.502 "aliases": [ 00:14:17.502 "a141e9d9-de47-4765-a4db-93e3602122a5" 00:14:17.502 ], 00:14:17.502 "product_name": "Malloc disk", 00:14:17.502 "block_size": 512, 00:14:17.502 "num_blocks": 65536, 00:14:17.502 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:17.502 "assigned_rate_limits": { 00:14:17.502 "rw_ios_per_sec": 0, 00:14:17.502 "rw_mbytes_per_sec": 0, 00:14:17.502 "r_mbytes_per_sec": 0, 00:14:17.502 "w_mbytes_per_sec": 0 00:14:17.502 }, 00:14:17.502 "claimed": true, 00:14:17.502 "claim_type": "exclusive_write", 00:14:17.502 "zoned": false, 00:14:17.502 "supported_io_types": { 00:14:17.502 "read": true, 00:14:17.502 "write": true, 00:14:17.502 "unmap": true, 00:14:17.502 "flush": true, 00:14:17.502 "reset": true, 00:14:17.502 "nvme_admin": false, 00:14:17.502 "nvme_io": false, 00:14:17.502 "nvme_io_md": false, 00:14:17.502 "write_zeroes": true, 00:14:17.502 "zcopy": true, 00:14:17.502 "get_zone_info": false, 00:14:17.502 "zone_management": false, 00:14:17.502 "zone_append": false, 00:14:17.502 "compare": false, 00:14:17.502 "compare_and_write": false, 00:14:17.502 "abort": true, 00:14:17.502 "seek_hole": false, 00:14:17.502 "seek_data": false, 00:14:17.502 "copy": true, 00:14:17.502 "nvme_iov_md": false 00:14:17.502 }, 00:14:17.502 "memory_domains": [ 00:14:17.502 { 00:14:17.502 "dma_device_id": "system", 00:14:17.502 "dma_device_type": 1 00:14:17.502 }, 00:14:17.502 { 00:14:17.502 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:17.502 "dma_device_type": 2 00:14:17.502 } 00:14:17.502 ], 00:14:17.502 "driver_specific": {} 00:14:17.502 } 00:14:17.502 ] 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.502 "name": "Existed_Raid", 00:14:17.502 "uuid": "0de19647-9083-4c92-bf22-fbe46dd3696f", 00:14:17.502 "strip_size_kb": 64, 00:14:17.502 "state": "configuring", 00:14:17.502 "raid_level": "raid5f", 00:14:17.502 "superblock": true, 00:14:17.502 "num_base_bdevs": 4, 00:14:17.502 "num_base_bdevs_discovered": 1, 00:14:17.502 "num_base_bdevs_operational": 4, 00:14:17.502 "base_bdevs_list": [ 00:14:17.502 { 00:14:17.502 "name": "BaseBdev1", 00:14:17.502 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:17.502 "is_configured": true, 00:14:17.502 "data_offset": 2048, 00:14:17.502 "data_size": 63488 00:14:17.502 }, 00:14:17.502 { 00:14:17.502 "name": "BaseBdev2", 00:14:17.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.502 "is_configured": false, 00:14:17.502 "data_offset": 0, 00:14:17.502 "data_size": 0 00:14:17.502 }, 00:14:17.502 { 00:14:17.502 "name": "BaseBdev3", 00:14:17.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.502 "is_configured": false, 00:14:17.502 "data_offset": 0, 00:14:17.502 "data_size": 0 00:14:17.502 }, 00:14:17.502 { 00:14:17.502 "name": "BaseBdev4", 00:14:17.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.502 "is_configured": false, 00:14:17.502 "data_offset": 0, 00:14:17.502 "data_size": 0 00:14:17.502 } 00:14:17.502 ] 00:14:17.502 }' 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.502 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.762 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.762 16:27:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.762 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.021 [2024-11-28 16:27:09.536376] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.021 [2024-11-28 16:27:09.536421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.021 [2024-11-28 16:27:09.548407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.021 [2024-11-28 16:27:09.550427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.021 [2024-11-28 16:27:09.550462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.021 [2024-11-28 16:27:09.550471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.021 [2024-11-28 16:27:09.550479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.021 [2024-11-28 16:27:09.550485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:18.021 [2024-11-28 16:27:09.550493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.021 16:27:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.021 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.021 "name": "Existed_Raid", 00:14:18.021 "uuid": "9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:18.021 "strip_size_kb": 64, 00:14:18.021 "state": "configuring", 00:14:18.021 "raid_level": "raid5f", 00:14:18.021 "superblock": true, 00:14:18.021 "num_base_bdevs": 4, 00:14:18.021 "num_base_bdevs_discovered": 1, 00:14:18.021 "num_base_bdevs_operational": 4, 00:14:18.021 "base_bdevs_list": [ 00:14:18.021 { 00:14:18.021 "name": "BaseBdev1", 00:14:18.021 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:18.022 "is_configured": true, 00:14:18.022 "data_offset": 2048, 00:14:18.022 "data_size": 63488 00:14:18.022 }, 00:14:18.022 { 00:14:18.022 "name": "BaseBdev2", 00:14:18.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.022 "is_configured": false, 00:14:18.022 "data_offset": 0, 00:14:18.022 "data_size": 0 00:14:18.022 }, 00:14:18.022 { 00:14:18.022 "name": "BaseBdev3", 00:14:18.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.022 "is_configured": false, 00:14:18.022 "data_offset": 0, 00:14:18.022 "data_size": 0 00:14:18.022 }, 00:14:18.022 { 00:14:18.022 "name": "BaseBdev4", 00:14:18.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.022 "is_configured": false, 00:14:18.022 "data_offset": 0, 00:14:18.022 "data_size": 0 00:14:18.022 } 00:14:18.022 ] 00:14:18.022 }' 00:14:18.022 16:27:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.022 16:27:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.282 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.282 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:18.282 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.541 [2024-11-28 16:27:10.054921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.541 BaseBdev2 00:14:18.541 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.541 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.542 [ 00:14:18.542 { 00:14:18.542 "name": "BaseBdev2", 00:14:18.542 "aliases": [ 00:14:18.542 
"78fb97da-32cd-4094-a7a0-1f9b467ea43d" 00:14:18.542 ], 00:14:18.542 "product_name": "Malloc disk", 00:14:18.542 "block_size": 512, 00:14:18.542 "num_blocks": 65536, 00:14:18.542 "uuid": "78fb97da-32cd-4094-a7a0-1f9b467ea43d", 00:14:18.542 "assigned_rate_limits": { 00:14:18.542 "rw_ios_per_sec": 0, 00:14:18.542 "rw_mbytes_per_sec": 0, 00:14:18.542 "r_mbytes_per_sec": 0, 00:14:18.542 "w_mbytes_per_sec": 0 00:14:18.542 }, 00:14:18.542 "claimed": true, 00:14:18.542 "claim_type": "exclusive_write", 00:14:18.542 "zoned": false, 00:14:18.542 "supported_io_types": { 00:14:18.542 "read": true, 00:14:18.542 "write": true, 00:14:18.542 "unmap": true, 00:14:18.542 "flush": true, 00:14:18.542 "reset": true, 00:14:18.542 "nvme_admin": false, 00:14:18.542 "nvme_io": false, 00:14:18.542 "nvme_io_md": false, 00:14:18.542 "write_zeroes": true, 00:14:18.542 "zcopy": true, 00:14:18.542 "get_zone_info": false, 00:14:18.542 "zone_management": false, 00:14:18.542 "zone_append": false, 00:14:18.542 "compare": false, 00:14:18.542 "compare_and_write": false, 00:14:18.542 "abort": true, 00:14:18.542 "seek_hole": false, 00:14:18.542 "seek_data": false, 00:14:18.542 "copy": true, 00:14:18.542 "nvme_iov_md": false 00:14:18.542 }, 00:14:18.542 "memory_domains": [ 00:14:18.542 { 00:14:18.542 "dma_device_id": "system", 00:14:18.542 "dma_device_type": 1 00:14:18.542 }, 00:14:18.542 { 00:14:18.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.542 "dma_device_type": 2 00:14:18.542 } 00:14:18.542 ], 00:14:18.542 "driver_specific": {} 00:14:18.542 } 00:14:18.542 ] 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.542 "name": "Existed_Raid", 00:14:18.542 "uuid": 
"9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:18.542 "strip_size_kb": 64, 00:14:18.542 "state": "configuring", 00:14:18.542 "raid_level": "raid5f", 00:14:18.542 "superblock": true, 00:14:18.542 "num_base_bdevs": 4, 00:14:18.542 "num_base_bdevs_discovered": 2, 00:14:18.542 "num_base_bdevs_operational": 4, 00:14:18.542 "base_bdevs_list": [ 00:14:18.542 { 00:14:18.542 "name": "BaseBdev1", 00:14:18.542 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:18.542 "is_configured": true, 00:14:18.542 "data_offset": 2048, 00:14:18.542 "data_size": 63488 00:14:18.542 }, 00:14:18.542 { 00:14:18.542 "name": "BaseBdev2", 00:14:18.542 "uuid": "78fb97da-32cd-4094-a7a0-1f9b467ea43d", 00:14:18.542 "is_configured": true, 00:14:18.542 "data_offset": 2048, 00:14:18.542 "data_size": 63488 00:14:18.542 }, 00:14:18.542 { 00:14:18.542 "name": "BaseBdev3", 00:14:18.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.542 "is_configured": false, 00:14:18.542 "data_offset": 0, 00:14:18.542 "data_size": 0 00:14:18.542 }, 00:14:18.542 { 00:14:18.542 "name": "BaseBdev4", 00:14:18.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.542 "is_configured": false, 00:14:18.542 "data_offset": 0, 00:14:18.542 "data_size": 0 00:14:18.542 } 00:14:18.542 ] 00:14:18.542 }' 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.542 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.802 [2024-11-28 16:27:10.550534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.802 BaseBdev3 
00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.802 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.062 [ 00:14:19.062 { 00:14:19.062 "name": "BaseBdev3", 00:14:19.062 "aliases": [ 00:14:19.062 "5afe4791-4948-4a7f-8e2d-50dd98f15abd" 00:14:19.062 ], 00:14:19.062 "product_name": "Malloc disk", 00:14:19.062 "block_size": 512, 00:14:19.062 "num_blocks": 65536, 00:14:19.062 "uuid": "5afe4791-4948-4a7f-8e2d-50dd98f15abd", 00:14:19.062 
"assigned_rate_limits": { 00:14:19.062 "rw_ios_per_sec": 0, 00:14:19.062 "rw_mbytes_per_sec": 0, 00:14:19.062 "r_mbytes_per_sec": 0, 00:14:19.062 "w_mbytes_per_sec": 0 00:14:19.062 }, 00:14:19.062 "claimed": true, 00:14:19.062 "claim_type": "exclusive_write", 00:14:19.062 "zoned": false, 00:14:19.062 "supported_io_types": { 00:14:19.062 "read": true, 00:14:19.062 "write": true, 00:14:19.062 "unmap": true, 00:14:19.062 "flush": true, 00:14:19.062 "reset": true, 00:14:19.062 "nvme_admin": false, 00:14:19.062 "nvme_io": false, 00:14:19.062 "nvme_io_md": false, 00:14:19.062 "write_zeroes": true, 00:14:19.062 "zcopy": true, 00:14:19.062 "get_zone_info": false, 00:14:19.062 "zone_management": false, 00:14:19.062 "zone_append": false, 00:14:19.062 "compare": false, 00:14:19.062 "compare_and_write": false, 00:14:19.062 "abort": true, 00:14:19.062 "seek_hole": false, 00:14:19.062 "seek_data": false, 00:14:19.062 "copy": true, 00:14:19.062 "nvme_iov_md": false 00:14:19.062 }, 00:14:19.062 "memory_domains": [ 00:14:19.062 { 00:14:19.062 "dma_device_id": "system", 00:14:19.062 "dma_device_type": 1 00:14:19.062 }, 00:14:19.062 { 00:14:19.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.062 "dma_device_type": 2 00:14:19.062 } 00:14:19.062 ], 00:14:19.062 "driver_specific": {} 00:14:19.062 } 00:14:19.062 ] 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.062 "name": "Existed_Raid", 00:14:19.062 "uuid": "9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:19.062 "strip_size_kb": 64, 00:14:19.062 "state": "configuring", 00:14:19.062 "raid_level": "raid5f", 00:14:19.062 "superblock": true, 00:14:19.062 "num_base_bdevs": 4, 00:14:19.062 "num_base_bdevs_discovered": 3, 
00:14:19.062 "num_base_bdevs_operational": 4, 00:14:19.062 "base_bdevs_list": [ 00:14:19.062 { 00:14:19.062 "name": "BaseBdev1", 00:14:19.062 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:19.062 "is_configured": true, 00:14:19.062 "data_offset": 2048, 00:14:19.062 "data_size": 63488 00:14:19.062 }, 00:14:19.062 { 00:14:19.062 "name": "BaseBdev2", 00:14:19.062 "uuid": "78fb97da-32cd-4094-a7a0-1f9b467ea43d", 00:14:19.062 "is_configured": true, 00:14:19.062 "data_offset": 2048, 00:14:19.062 "data_size": 63488 00:14:19.062 }, 00:14:19.062 { 00:14:19.062 "name": "BaseBdev3", 00:14:19.062 "uuid": "5afe4791-4948-4a7f-8e2d-50dd98f15abd", 00:14:19.062 "is_configured": true, 00:14:19.062 "data_offset": 2048, 00:14:19.062 "data_size": 63488 00:14:19.062 }, 00:14:19.062 { 00:14:19.062 "name": "BaseBdev4", 00:14:19.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.062 "is_configured": false, 00:14:19.062 "data_offset": 0, 00:14:19.062 "data_size": 0 00:14:19.062 } 00:14:19.062 ] 00:14:19.062 }' 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.062 16:27:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:19.322 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.322 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 [2024-11-28 16:27:11.090585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.322 [2024-11-28 16:27:11.090827] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:19.322 [2024-11-28 16:27:11.090860] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:19.322 [2024-11-28 
16:27:11.091174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:19.322 BaseBdev4 00:14:19.322 [2024-11-28 16:27:11.091737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:19.322 [2024-11-28 16:27:11.091771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:19.322 [2024-11-28 16:27:11.091968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.322 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.322 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:19.582 16:27:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.582 [ 00:14:19.582 { 00:14:19.582 "name": "BaseBdev4", 00:14:19.582 "aliases": [ 00:14:19.582 "b4e0adda-b1c0-4539-b267-d864a1955c1c" 00:14:19.582 ], 00:14:19.582 "product_name": "Malloc disk", 00:14:19.582 "block_size": 512, 00:14:19.582 "num_blocks": 65536, 00:14:19.582 "uuid": "b4e0adda-b1c0-4539-b267-d864a1955c1c", 00:14:19.582 "assigned_rate_limits": { 00:14:19.582 "rw_ios_per_sec": 0, 00:14:19.582 "rw_mbytes_per_sec": 0, 00:14:19.582 "r_mbytes_per_sec": 0, 00:14:19.582 "w_mbytes_per_sec": 0 00:14:19.582 }, 00:14:19.582 "claimed": true, 00:14:19.582 "claim_type": "exclusive_write", 00:14:19.582 "zoned": false, 00:14:19.582 "supported_io_types": { 00:14:19.582 "read": true, 00:14:19.582 "write": true, 00:14:19.582 "unmap": true, 00:14:19.582 "flush": true, 00:14:19.582 "reset": true, 00:14:19.582 "nvme_admin": false, 00:14:19.582 "nvme_io": false, 00:14:19.582 "nvme_io_md": false, 00:14:19.582 "write_zeroes": true, 00:14:19.582 "zcopy": true, 00:14:19.582 "get_zone_info": false, 00:14:19.582 "zone_management": false, 00:14:19.582 "zone_append": false, 00:14:19.582 "compare": false, 00:14:19.582 "compare_and_write": false, 00:14:19.582 "abort": true, 00:14:19.582 "seek_hole": false, 00:14:19.582 "seek_data": false, 00:14:19.582 "copy": true, 00:14:19.582 "nvme_iov_md": false 00:14:19.582 }, 00:14:19.582 "memory_domains": [ 00:14:19.582 { 00:14:19.582 "dma_device_id": "system", 00:14:19.582 "dma_device_type": 1 00:14:19.582 }, 00:14:19.582 { 00:14:19.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.582 "dma_device_type": 2 00:14:19.582 } 00:14:19.582 ], 00:14:19.582 "driver_specific": {} 00:14:19.582 } 00:14:19.582 ] 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.582 16:27:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.582 "name": "Existed_Raid", 00:14:19.582 "uuid": "9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:19.582 "strip_size_kb": 64, 00:14:19.582 "state": "online", 00:14:19.582 "raid_level": "raid5f", 00:14:19.582 "superblock": true, 00:14:19.582 "num_base_bdevs": 4, 00:14:19.582 "num_base_bdevs_discovered": 4, 00:14:19.582 "num_base_bdevs_operational": 4, 00:14:19.582 "base_bdevs_list": [ 00:14:19.582 { 00:14:19.582 "name": "BaseBdev1", 00:14:19.582 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:19.582 "is_configured": true, 00:14:19.582 "data_offset": 2048, 00:14:19.582 "data_size": 63488 00:14:19.582 }, 00:14:19.582 { 00:14:19.582 "name": "BaseBdev2", 00:14:19.582 "uuid": "78fb97da-32cd-4094-a7a0-1f9b467ea43d", 00:14:19.582 "is_configured": true, 00:14:19.582 "data_offset": 2048, 00:14:19.582 "data_size": 63488 00:14:19.582 }, 00:14:19.582 { 00:14:19.582 "name": "BaseBdev3", 00:14:19.582 "uuid": "5afe4791-4948-4a7f-8e2d-50dd98f15abd", 00:14:19.582 "is_configured": true, 00:14:19.582 "data_offset": 2048, 00:14:19.582 "data_size": 63488 00:14:19.582 }, 00:14:19.582 { 00:14:19.582 "name": "BaseBdev4", 00:14:19.582 "uuid": "b4e0adda-b1c0-4539-b267-d864a1955c1c", 00:14:19.582 "is_configured": true, 00:14:19.582 "data_offset": 2048, 00:14:19.582 "data_size": 63488 00:14:19.582 } 00:14:19.582 ] 00:14:19.582 }' 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.582 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.842 [2024-11-28 16:27:11.586235] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.842 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.842 "name": "Existed_Raid", 00:14:19.842 "aliases": [ 00:14:19.842 "9bfb6109-703c-4f40-8f86-dd3fd776ad64" 00:14:19.842 ], 00:14:19.842 "product_name": "Raid Volume", 00:14:19.842 "block_size": 512, 00:14:19.842 "num_blocks": 190464, 00:14:19.842 "uuid": "9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:19.842 "assigned_rate_limits": { 00:14:19.842 "rw_ios_per_sec": 0, 00:14:19.842 "rw_mbytes_per_sec": 0, 00:14:19.842 "r_mbytes_per_sec": 0, 00:14:19.842 "w_mbytes_per_sec": 0 00:14:19.842 }, 00:14:19.842 "claimed": false, 00:14:19.842 "zoned": false, 00:14:19.842 "supported_io_types": { 00:14:19.842 "read": true, 00:14:19.842 "write": true, 00:14:19.842 "unmap": false, 00:14:19.842 "flush": false, 
00:14:19.842 "reset": true, 00:14:19.842 "nvme_admin": false, 00:14:19.842 "nvme_io": false, 00:14:19.842 "nvme_io_md": false, 00:14:19.842 "write_zeroes": true, 00:14:19.842 "zcopy": false, 00:14:19.842 "get_zone_info": false, 00:14:19.842 "zone_management": false, 00:14:19.842 "zone_append": false, 00:14:19.842 "compare": false, 00:14:19.842 "compare_and_write": false, 00:14:19.842 "abort": false, 00:14:19.842 "seek_hole": false, 00:14:19.842 "seek_data": false, 00:14:19.842 "copy": false, 00:14:19.842 "nvme_iov_md": false 00:14:19.842 }, 00:14:19.842 "driver_specific": { 00:14:19.842 "raid": { 00:14:19.842 "uuid": "9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:19.842 "strip_size_kb": 64, 00:14:19.842 "state": "online", 00:14:19.842 "raid_level": "raid5f", 00:14:19.842 "superblock": true, 00:14:19.842 "num_base_bdevs": 4, 00:14:19.842 "num_base_bdevs_discovered": 4, 00:14:19.842 "num_base_bdevs_operational": 4, 00:14:19.842 "base_bdevs_list": [ 00:14:19.842 { 00:14:19.842 "name": "BaseBdev1", 00:14:19.842 "uuid": "a141e9d9-de47-4765-a4db-93e3602122a5", 00:14:19.842 "is_configured": true, 00:14:19.842 "data_offset": 2048, 00:14:19.842 "data_size": 63488 00:14:19.842 }, 00:14:19.842 { 00:14:19.842 "name": "BaseBdev2", 00:14:19.842 "uuid": "78fb97da-32cd-4094-a7a0-1f9b467ea43d", 00:14:19.842 "is_configured": true, 00:14:19.842 "data_offset": 2048, 00:14:19.842 "data_size": 63488 00:14:19.842 }, 00:14:19.842 { 00:14:19.842 "name": "BaseBdev3", 00:14:19.842 "uuid": "5afe4791-4948-4a7f-8e2d-50dd98f15abd", 00:14:19.842 "is_configured": true, 00:14:19.842 "data_offset": 2048, 00:14:19.842 "data_size": 63488 00:14:19.842 }, 00:14:19.842 { 00:14:19.842 "name": "BaseBdev4", 00:14:19.842 "uuid": "b4e0adda-b1c0-4539-b267-d864a1955c1c", 00:14:19.842 "is_configured": true, 00:14:19.842 "data_offset": 2048, 00:14:19.842 "data_size": 63488 00:14:19.842 } 00:14:19.842 ] 00:14:19.842 } 00:14:19.842 } 00:14:19.842 }' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:20.102 BaseBdev2 00:14:20.102 BaseBdev3 00:14:20.102 BaseBdev4' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.102 16:27:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:20.102 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.362 [2024-11-28 16:27:11.909485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.362 "name": "Existed_Raid", 00:14:20.362 "uuid": "9bfb6109-703c-4f40-8f86-dd3fd776ad64", 00:14:20.362 "strip_size_kb": 64, 00:14:20.362 "state": "online", 00:14:20.362 "raid_level": "raid5f", 00:14:20.362 "superblock": true, 00:14:20.362 "num_base_bdevs": 4, 00:14:20.362 "num_base_bdevs_discovered": 3, 00:14:20.362 "num_base_bdevs_operational": 3, 00:14:20.362 "base_bdevs_list": [ 00:14:20.362 { 00:14:20.362 "name": null, 00:14:20.362 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:20.362 "is_configured": false, 00:14:20.362 "data_offset": 0, 00:14:20.362 "data_size": 63488 00:14:20.362 }, 00:14:20.362 { 00:14:20.362 "name": "BaseBdev2", 00:14:20.362 "uuid": "78fb97da-32cd-4094-a7a0-1f9b467ea43d", 00:14:20.362 "is_configured": true, 00:14:20.362 "data_offset": 2048, 00:14:20.362 "data_size": 63488 00:14:20.362 }, 00:14:20.362 { 00:14:20.362 "name": "BaseBdev3", 00:14:20.362 "uuid": "5afe4791-4948-4a7f-8e2d-50dd98f15abd", 00:14:20.362 "is_configured": true, 00:14:20.362 "data_offset": 2048, 00:14:20.362 "data_size": 63488 00:14:20.362 }, 00:14:20.362 { 00:14:20.362 "name": "BaseBdev4", 00:14:20.362 "uuid": "b4e0adda-b1c0-4539-b267-d864a1955c1c", 00:14:20.362 "is_configured": true, 00:14:20.362 "data_offset": 2048, 00:14:20.362 "data_size": 63488 00:14:20.362 } 00:14:20.362 ] 00:14:20.362 }' 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.362 16:27:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:20.622 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.882 [2024-11-28 16:27:12.409586] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.882 [2024-11-28 16:27:12.409749] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.882 [2024-11-28 16:27:12.430459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.882 
16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.882 [2024-11-28 16:27:12.490348] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.882 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.883 [2024-11-28 16:27:12.569959] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:20.883 [2024-11-28 16:27:12.570005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.883 16:27:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.143 BaseBdev2 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 [ 00:14:21.143 { 00:14:21.143 "name": "BaseBdev2", 00:14:21.143 "aliases": [ 00:14:21.143 "a654fbeb-fd50-466b-901d-7b58ad890728" 00:14:21.143 ], 00:14:21.143 "product_name": "Malloc disk", 00:14:21.143 "block_size": 512, 00:14:21.143 "num_blocks": 65536, 00:14:21.143 "uuid": 
"a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:21.143 "assigned_rate_limits": { 00:14:21.143 "rw_ios_per_sec": 0, 00:14:21.143 "rw_mbytes_per_sec": 0, 00:14:21.143 "r_mbytes_per_sec": 0, 00:14:21.143 "w_mbytes_per_sec": 0 00:14:21.143 }, 00:14:21.143 "claimed": false, 00:14:21.143 "zoned": false, 00:14:21.143 "supported_io_types": { 00:14:21.143 "read": true, 00:14:21.143 "write": true, 00:14:21.143 "unmap": true, 00:14:21.143 "flush": true, 00:14:21.143 "reset": true, 00:14:21.143 "nvme_admin": false, 00:14:21.143 "nvme_io": false, 00:14:21.143 "nvme_io_md": false, 00:14:21.143 "write_zeroes": true, 00:14:21.143 "zcopy": true, 00:14:21.143 "get_zone_info": false, 00:14:21.143 "zone_management": false, 00:14:21.143 "zone_append": false, 00:14:21.143 "compare": false, 00:14:21.143 "compare_and_write": false, 00:14:21.143 "abort": true, 00:14:21.143 "seek_hole": false, 00:14:21.143 "seek_data": false, 00:14:21.143 "copy": true, 00:14:21.143 "nvme_iov_md": false 00:14:21.143 }, 00:14:21.143 "memory_domains": [ 00:14:21.143 { 00:14:21.143 "dma_device_id": "system", 00:14:21.143 "dma_device_type": 1 00:14:21.143 }, 00:14:21.143 { 00:14:21.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.143 "dma_device_type": 2 00:14:21.143 } 00:14:21.143 ], 00:14:21.143 "driver_specific": {} 00:14:21.143 } 00:14:21.143 ] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 BaseBdev3 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 [ 00:14:21.143 { 00:14:21.143 "name": "BaseBdev3", 00:14:21.143 "aliases": [ 00:14:21.143 "d36a82c3-f33b-4fa6-87dc-c5011f902cd4" 00:14:21.143 ], 00:14:21.143 
"product_name": "Malloc disk", 00:14:21.143 "block_size": 512, 00:14:21.143 "num_blocks": 65536, 00:14:21.143 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:21.143 "assigned_rate_limits": { 00:14:21.143 "rw_ios_per_sec": 0, 00:14:21.143 "rw_mbytes_per_sec": 0, 00:14:21.143 "r_mbytes_per_sec": 0, 00:14:21.143 "w_mbytes_per_sec": 0 00:14:21.143 }, 00:14:21.143 "claimed": false, 00:14:21.143 "zoned": false, 00:14:21.143 "supported_io_types": { 00:14:21.143 "read": true, 00:14:21.143 "write": true, 00:14:21.143 "unmap": true, 00:14:21.143 "flush": true, 00:14:21.143 "reset": true, 00:14:21.143 "nvme_admin": false, 00:14:21.143 "nvme_io": false, 00:14:21.143 "nvme_io_md": false, 00:14:21.143 "write_zeroes": true, 00:14:21.143 "zcopy": true, 00:14:21.143 "get_zone_info": false, 00:14:21.143 "zone_management": false, 00:14:21.143 "zone_append": false, 00:14:21.143 "compare": false, 00:14:21.143 "compare_and_write": false, 00:14:21.143 "abort": true, 00:14:21.143 "seek_hole": false, 00:14:21.143 "seek_data": false, 00:14:21.143 "copy": true, 00:14:21.143 "nvme_iov_md": false 00:14:21.143 }, 00:14:21.143 "memory_domains": [ 00:14:21.143 { 00:14:21.143 "dma_device_id": "system", 00:14:21.143 "dma_device_type": 1 00:14:21.143 }, 00:14:21.143 { 00:14:21.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.143 "dma_device_type": 2 00:14:21.143 } 00:14:21.143 ], 00:14:21.143 "driver_specific": {} 00:14:21.143 } 00:14:21.143 ] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 BaseBdev4 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.143 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.143 [ 00:14:21.143 { 00:14:21.143 "name": "BaseBdev4", 00:14:21.143 
"aliases": [ 00:14:21.143 "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441" 00:14:21.143 ], 00:14:21.143 "product_name": "Malloc disk", 00:14:21.143 "block_size": 512, 00:14:21.143 "num_blocks": 65536, 00:14:21.143 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:21.143 "assigned_rate_limits": { 00:14:21.143 "rw_ios_per_sec": 0, 00:14:21.143 "rw_mbytes_per_sec": 0, 00:14:21.143 "r_mbytes_per_sec": 0, 00:14:21.143 "w_mbytes_per_sec": 0 00:14:21.143 }, 00:14:21.143 "claimed": false, 00:14:21.143 "zoned": false, 00:14:21.143 "supported_io_types": { 00:14:21.143 "read": true, 00:14:21.143 "write": true, 00:14:21.143 "unmap": true, 00:14:21.143 "flush": true, 00:14:21.143 "reset": true, 00:14:21.143 "nvme_admin": false, 00:14:21.143 "nvme_io": false, 00:14:21.144 "nvme_io_md": false, 00:14:21.144 "write_zeroes": true, 00:14:21.144 "zcopy": true, 00:14:21.144 "get_zone_info": false, 00:14:21.144 "zone_management": false, 00:14:21.144 "zone_append": false, 00:14:21.144 "compare": false, 00:14:21.144 "compare_and_write": false, 00:14:21.144 "abort": true, 00:14:21.144 "seek_hole": false, 00:14:21.144 "seek_data": false, 00:14:21.144 "copy": true, 00:14:21.144 "nvme_iov_md": false 00:14:21.144 }, 00:14:21.144 "memory_domains": [ 00:14:21.144 { 00:14:21.144 "dma_device_id": "system", 00:14:21.144 "dma_device_type": 1 00:14:21.144 }, 00:14:21.144 { 00:14:21.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.144 "dma_device_type": 2 00:14:21.144 } 00:14:21.144 ], 00:14:21.144 "driver_specific": {} 00:14:21.144 } 00:14:21.144 ] 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.144 
16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.144 [2024-11-28 16:27:12.822259] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.144 [2024-11-28 16:27:12.822311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.144 [2024-11-28 16:27:12.822333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.144 [2024-11-28 16:27:12.824394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.144 [2024-11-28 16:27:12.824444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.144 "name": "Existed_Raid", 00:14:21.144 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:21.144 "strip_size_kb": 64, 00:14:21.144 "state": "configuring", 00:14:21.144 "raid_level": "raid5f", 00:14:21.144 "superblock": true, 00:14:21.144 "num_base_bdevs": 4, 00:14:21.144 "num_base_bdevs_discovered": 3, 00:14:21.144 "num_base_bdevs_operational": 4, 00:14:21.144 "base_bdevs_list": [ 00:14:21.144 { 00:14:21.144 "name": "BaseBdev1", 00:14:21.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.144 "is_configured": false, 00:14:21.144 "data_offset": 0, 00:14:21.144 "data_size": 0 00:14:21.144 }, 00:14:21.144 { 00:14:21.144 "name": "BaseBdev2", 00:14:21.144 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:21.144 "is_configured": true, 00:14:21.144 "data_offset": 2048, 00:14:21.144 "data_size": 63488 00:14:21.144 }, 00:14:21.144 { 00:14:21.144 "name": "BaseBdev3", 
00:14:21.144 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:21.144 "is_configured": true, 00:14:21.144 "data_offset": 2048, 00:14:21.144 "data_size": 63488 00:14:21.144 }, 00:14:21.144 { 00:14:21.144 "name": "BaseBdev4", 00:14:21.144 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:21.144 "is_configured": true, 00:14:21.144 "data_offset": 2048, 00:14:21.144 "data_size": 63488 00:14:21.144 } 00:14:21.144 ] 00:14:21.144 }' 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.144 16:27:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.713 [2024-11-28 16:27:13.249475] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.713 
16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.713 "name": "Existed_Raid", 00:14:21.713 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:21.713 "strip_size_kb": 64, 00:14:21.713 "state": "configuring", 00:14:21.713 "raid_level": "raid5f", 00:14:21.713 "superblock": true, 00:14:21.713 "num_base_bdevs": 4, 00:14:21.713 "num_base_bdevs_discovered": 2, 00:14:21.713 "num_base_bdevs_operational": 4, 00:14:21.713 "base_bdevs_list": [ 00:14:21.713 { 00:14:21.713 "name": "BaseBdev1", 00:14:21.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.713 "is_configured": false, 00:14:21.713 "data_offset": 0, 00:14:21.713 "data_size": 0 00:14:21.713 }, 00:14:21.713 { 00:14:21.713 "name": null, 00:14:21.713 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:21.713 "is_configured": false, 00:14:21.713 "data_offset": 0, 00:14:21.713 "data_size": 63488 00:14:21.713 }, 00:14:21.713 { 
00:14:21.713 "name": "BaseBdev3", 00:14:21.713 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:21.713 "is_configured": true, 00:14:21.713 "data_offset": 2048, 00:14:21.713 "data_size": 63488 00:14:21.713 }, 00:14:21.713 { 00:14:21.713 "name": "BaseBdev4", 00:14:21.713 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:21.713 "is_configured": true, 00:14:21.713 "data_offset": 2048, 00:14:21.713 "data_size": 63488 00:14:21.713 } 00:14:21.713 ] 00:14:21.713 }' 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.713 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.973 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.973 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.973 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.973 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.973 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.232 [2024-11-28 16:27:13.765311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.232 BaseBdev1 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.232 [ 00:14:22.232 { 00:14:22.232 "name": "BaseBdev1", 00:14:22.232 "aliases": [ 00:14:22.232 "0004a49d-5e8e-4957-b957-86254bc1a0cd" 00:14:22.232 ], 00:14:22.232 "product_name": "Malloc disk", 00:14:22.232 "block_size": 512, 00:14:22.232 "num_blocks": 65536, 00:14:22.232 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:22.232 "assigned_rate_limits": { 00:14:22.232 "rw_ios_per_sec": 0, 00:14:22.232 "rw_mbytes_per_sec": 0, 00:14:22.232 
"r_mbytes_per_sec": 0, 00:14:22.232 "w_mbytes_per_sec": 0 00:14:22.232 }, 00:14:22.232 "claimed": true, 00:14:22.232 "claim_type": "exclusive_write", 00:14:22.232 "zoned": false, 00:14:22.232 "supported_io_types": { 00:14:22.232 "read": true, 00:14:22.232 "write": true, 00:14:22.232 "unmap": true, 00:14:22.232 "flush": true, 00:14:22.232 "reset": true, 00:14:22.232 "nvme_admin": false, 00:14:22.232 "nvme_io": false, 00:14:22.232 "nvme_io_md": false, 00:14:22.232 "write_zeroes": true, 00:14:22.232 "zcopy": true, 00:14:22.232 "get_zone_info": false, 00:14:22.232 "zone_management": false, 00:14:22.232 "zone_append": false, 00:14:22.232 "compare": false, 00:14:22.232 "compare_and_write": false, 00:14:22.232 "abort": true, 00:14:22.232 "seek_hole": false, 00:14:22.232 "seek_data": false, 00:14:22.232 "copy": true, 00:14:22.232 "nvme_iov_md": false 00:14:22.232 }, 00:14:22.232 "memory_domains": [ 00:14:22.232 { 00:14:22.232 "dma_device_id": "system", 00:14:22.232 "dma_device_type": 1 00:14:22.232 }, 00:14:22.232 { 00:14:22.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.232 "dma_device_type": 2 00:14:22.232 } 00:14:22.232 ], 00:14:22.232 "driver_specific": {} 00:14:22.232 } 00:14:22.232 ] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.232 16:27:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.232 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.232 "name": "Existed_Raid", 00:14:22.232 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:22.232 "strip_size_kb": 64, 00:14:22.232 "state": "configuring", 00:14:22.232 "raid_level": "raid5f", 00:14:22.232 "superblock": true, 00:14:22.232 "num_base_bdevs": 4, 00:14:22.232 "num_base_bdevs_discovered": 3, 00:14:22.232 "num_base_bdevs_operational": 4, 00:14:22.232 "base_bdevs_list": [ 00:14:22.232 { 00:14:22.232 "name": "BaseBdev1", 00:14:22.232 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:22.232 "is_configured": true, 00:14:22.232 "data_offset": 2048, 00:14:22.232 "data_size": 63488 00:14:22.232 
}, 00:14:22.232 { 00:14:22.232 "name": null, 00:14:22.232 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:22.232 "is_configured": false, 00:14:22.232 "data_offset": 0, 00:14:22.232 "data_size": 63488 00:14:22.232 }, 00:14:22.232 { 00:14:22.232 "name": "BaseBdev3", 00:14:22.233 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:22.233 "is_configured": true, 00:14:22.233 "data_offset": 2048, 00:14:22.233 "data_size": 63488 00:14:22.233 }, 00:14:22.233 { 00:14:22.233 "name": "BaseBdev4", 00:14:22.233 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:22.233 "is_configured": true, 00:14:22.233 "data_offset": 2048, 00:14:22.233 "data_size": 63488 00:14:22.233 } 00:14:22.233 ] 00:14:22.233 }' 00:14:22.233 16:27:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.233 16:27:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.493 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.493 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.493 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.493 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.752 
[2024-11-28 16:27:14.308372] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.752 "name": "Existed_Raid", 00:14:22.752 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:22.752 "strip_size_kb": 64, 00:14:22.752 "state": "configuring", 00:14:22.752 "raid_level": "raid5f", 00:14:22.752 "superblock": true, 00:14:22.752 "num_base_bdevs": 4, 00:14:22.752 "num_base_bdevs_discovered": 2, 00:14:22.752 "num_base_bdevs_operational": 4, 00:14:22.752 "base_bdevs_list": [ 00:14:22.752 { 00:14:22.752 "name": "BaseBdev1", 00:14:22.752 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:22.752 "is_configured": true, 00:14:22.752 "data_offset": 2048, 00:14:22.752 "data_size": 63488 00:14:22.752 }, 00:14:22.752 { 00:14:22.752 "name": null, 00:14:22.752 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:22.752 "is_configured": false, 00:14:22.752 "data_offset": 0, 00:14:22.752 "data_size": 63488 00:14:22.752 }, 00:14:22.752 { 00:14:22.752 "name": null, 00:14:22.752 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:22.752 "is_configured": false, 00:14:22.752 "data_offset": 0, 00:14:22.752 "data_size": 63488 00:14:22.752 }, 00:14:22.752 { 00:14:22.752 "name": "BaseBdev4", 00:14:22.752 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:22.752 "is_configured": true, 00:14:22.752 "data_offset": 2048, 00:14:22.752 "data_size": 63488 00:14:22.752 } 00:14:22.752 ] 00:14:22.752 }' 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.752 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.030 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.030 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:23.030 16:27:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.030 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.289 [2024-11-28 16:27:14.835626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.289 16:27:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.289 "name": "Existed_Raid", 00:14:23.289 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:23.289 "strip_size_kb": 64, 00:14:23.289 "state": "configuring", 00:14:23.289 "raid_level": "raid5f", 00:14:23.289 "superblock": true, 00:14:23.289 "num_base_bdevs": 4, 00:14:23.289 "num_base_bdevs_discovered": 3, 00:14:23.289 "num_base_bdevs_operational": 4, 00:14:23.289 "base_bdevs_list": [ 00:14:23.289 { 00:14:23.289 "name": "BaseBdev1", 00:14:23.289 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:23.289 "is_configured": true, 00:14:23.289 "data_offset": 2048, 00:14:23.289 "data_size": 63488 00:14:23.289 }, 00:14:23.289 { 00:14:23.289 "name": null, 00:14:23.289 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:23.289 "is_configured": false, 00:14:23.289 "data_offset": 0, 00:14:23.289 "data_size": 63488 00:14:23.289 }, 00:14:23.289 { 00:14:23.289 "name": "BaseBdev3", 00:14:23.289 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:23.289 "is_configured": true, 00:14:23.289 "data_offset": 2048, 00:14:23.289 "data_size": 63488 00:14:23.289 }, 00:14:23.289 { 
00:14:23.289 "name": "BaseBdev4", 00:14:23.289 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:23.289 "is_configured": true, 00:14:23.289 "data_offset": 2048, 00:14:23.289 "data_size": 63488 00:14:23.289 } 00:14:23.289 ] 00:14:23.289 }' 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.289 16:27:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.551 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.551 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:23.551 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.551 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.551 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 [2024-11-28 16:27:15.334760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.813 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.813 "name": "Existed_Raid", 00:14:23.813 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:23.813 "strip_size_kb": 64, 00:14:23.813 "state": "configuring", 00:14:23.813 "raid_level": "raid5f", 00:14:23.813 "superblock": true, 00:14:23.813 "num_base_bdevs": 4, 00:14:23.813 "num_base_bdevs_discovered": 2, 00:14:23.813 
"num_base_bdevs_operational": 4, 00:14:23.813 "base_bdevs_list": [ 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 0, 00:14:23.813 "data_size": 63488 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 0, 00:14:23.813 "data_size": 63488 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": "BaseBdev3", 00:14:23.813 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:23.813 "is_configured": true, 00:14:23.814 "data_offset": 2048, 00:14:23.814 "data_size": 63488 00:14:23.814 }, 00:14:23.814 { 00:14:23.814 "name": "BaseBdev4", 00:14:23.814 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:23.814 "is_configured": true, 00:14:23.814 "data_offset": 2048, 00:14:23.814 "data_size": 63488 00:14:23.814 } 00:14:23.814 ] 00:14:23.814 }' 00:14:23.814 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.814 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.073 [2024-11-28 16:27:15.753954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.073 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.073 "name": "Existed_Raid", 00:14:24.073 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:24.073 "strip_size_kb": 64, 00:14:24.073 "state": "configuring", 00:14:24.073 "raid_level": "raid5f", 00:14:24.073 "superblock": true, 00:14:24.073 "num_base_bdevs": 4, 00:14:24.073 "num_base_bdevs_discovered": 3, 00:14:24.073 "num_base_bdevs_operational": 4, 00:14:24.073 "base_bdevs_list": [ 00:14:24.073 { 00:14:24.073 "name": null, 00:14:24.074 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:24.074 "is_configured": false, 00:14:24.074 "data_offset": 0, 00:14:24.074 "data_size": 63488 00:14:24.074 }, 00:14:24.074 { 00:14:24.074 "name": "BaseBdev2", 00:14:24.074 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:24.074 "is_configured": true, 00:14:24.074 "data_offset": 2048, 00:14:24.074 "data_size": 63488 00:14:24.074 }, 00:14:24.074 { 00:14:24.074 "name": "BaseBdev3", 00:14:24.074 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:24.074 "is_configured": true, 00:14:24.074 "data_offset": 2048, 00:14:24.074 "data_size": 63488 00:14:24.074 }, 00:14:24.074 { 00:14:24.074 "name": "BaseBdev4", 00:14:24.074 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:24.074 "is_configured": true, 00:14:24.074 "data_offset": 2048, 00:14:24.074 "data_size": 63488 00:14:24.074 } 00:14:24.074 ] 00:14:24.074 }' 00:14:24.074 16:27:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.074 16:27:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:24.681 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0004a49d-5e8e-4957-b957-86254bc1a0cd 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.682 [2024-11-28 16:27:16.340278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:24.682 [2024-11-28 16:27:16.340537] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:24.682 [2024-11-28 
16:27:16.340575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:24.682 [2024-11-28 16:27:16.340896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:24.682 NewBaseBdev 00:14:24.682 [2024-11-28 16:27:16.341407] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:24.682 [2024-11-28 16:27:16.341463] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:24.682 [2024-11-28 16:27:16.341601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.682 [ 00:14:24.682 { 00:14:24.682 "name": "NewBaseBdev", 00:14:24.682 "aliases": [ 00:14:24.682 "0004a49d-5e8e-4957-b957-86254bc1a0cd" 00:14:24.682 ], 00:14:24.682 "product_name": "Malloc disk", 00:14:24.682 "block_size": 512, 00:14:24.682 "num_blocks": 65536, 00:14:24.682 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:24.682 "assigned_rate_limits": { 00:14:24.682 "rw_ios_per_sec": 0, 00:14:24.682 "rw_mbytes_per_sec": 0, 00:14:24.682 "r_mbytes_per_sec": 0, 00:14:24.682 "w_mbytes_per_sec": 0 00:14:24.682 }, 00:14:24.682 "claimed": true, 00:14:24.682 "claim_type": "exclusive_write", 00:14:24.682 "zoned": false, 00:14:24.682 "supported_io_types": { 00:14:24.682 "read": true, 00:14:24.682 "write": true, 00:14:24.682 "unmap": true, 00:14:24.682 "flush": true, 00:14:24.682 "reset": true, 00:14:24.682 "nvme_admin": false, 00:14:24.682 "nvme_io": false, 00:14:24.682 "nvme_io_md": false, 00:14:24.682 "write_zeroes": true, 00:14:24.682 "zcopy": true, 00:14:24.682 "get_zone_info": false, 00:14:24.682 "zone_management": false, 00:14:24.682 "zone_append": false, 00:14:24.682 "compare": false, 00:14:24.682 "compare_and_write": false, 00:14:24.682 "abort": true, 00:14:24.682 "seek_hole": false, 00:14:24.682 "seek_data": false, 00:14:24.682 "copy": true, 00:14:24.682 "nvme_iov_md": false 00:14:24.682 }, 00:14:24.682 "memory_domains": [ 00:14:24.682 { 00:14:24.682 "dma_device_id": "system", 00:14:24.682 "dma_device_type": 1 00:14:24.682 }, 00:14:24.682 { 00:14:24.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.682 "dma_device_type": 2 00:14:24.682 } 00:14:24.682 ], 00:14:24.682 "driver_specific": {} 00:14:24.682 } 00:14:24.682 ] 00:14:24.682 16:27:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:24.682 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.682 "name": "Existed_Raid", 00:14:24.682 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:24.682 "strip_size_kb": 64, 00:14:24.682 "state": "online", 00:14:24.682 "raid_level": "raid5f", 00:14:24.682 "superblock": true, 00:14:24.682 "num_base_bdevs": 4, 00:14:24.682 "num_base_bdevs_discovered": 4, 00:14:24.682 "num_base_bdevs_operational": 4, 00:14:24.682 "base_bdevs_list": [ 00:14:24.682 { 00:14:24.682 "name": "NewBaseBdev", 00:14:24.682 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:24.682 "is_configured": true, 00:14:24.682 "data_offset": 2048, 00:14:24.682 "data_size": 63488 00:14:24.682 }, 00:14:24.682 { 00:14:24.682 "name": "BaseBdev2", 00:14:24.682 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:24.682 "is_configured": true, 00:14:24.682 "data_offset": 2048, 00:14:24.682 "data_size": 63488 00:14:24.682 }, 00:14:24.682 { 00:14:24.682 "name": "BaseBdev3", 00:14:24.682 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:24.682 "is_configured": true, 00:14:24.682 "data_offset": 2048, 00:14:24.682 "data_size": 63488 00:14:24.682 }, 00:14:24.682 { 00:14:24.682 "name": "BaseBdev4", 00:14:24.682 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:24.682 "is_configured": true, 00:14:24.682 "data_offset": 2048, 00:14:24.683 "data_size": 63488 00:14:24.683 } 00:14:24.683 ] 00:14:24.683 }' 00:14:24.683 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.683 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.249 [2024-11-28 16:27:16.835630] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.249 "name": "Existed_Raid", 00:14:25.249 "aliases": [ 00:14:25.249 "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff" 00:14:25.249 ], 00:14:25.249 "product_name": "Raid Volume", 00:14:25.249 "block_size": 512, 00:14:25.249 "num_blocks": 190464, 00:14:25.249 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:25.249 "assigned_rate_limits": { 00:14:25.249 "rw_ios_per_sec": 0, 00:14:25.249 "rw_mbytes_per_sec": 0, 00:14:25.249 "r_mbytes_per_sec": 0, 00:14:25.249 "w_mbytes_per_sec": 0 00:14:25.249 }, 00:14:25.249 "claimed": false, 00:14:25.249 "zoned": false, 00:14:25.249 "supported_io_types": { 00:14:25.249 "read": true, 00:14:25.249 "write": true, 00:14:25.249 "unmap": false, 00:14:25.249 "flush": false, 00:14:25.249 "reset": true, 00:14:25.249 "nvme_admin": false, 00:14:25.249 "nvme_io": false, 
00:14:25.249 "nvme_io_md": false, 00:14:25.249 "write_zeroes": true, 00:14:25.249 "zcopy": false, 00:14:25.249 "get_zone_info": false, 00:14:25.249 "zone_management": false, 00:14:25.249 "zone_append": false, 00:14:25.249 "compare": false, 00:14:25.249 "compare_and_write": false, 00:14:25.249 "abort": false, 00:14:25.249 "seek_hole": false, 00:14:25.249 "seek_data": false, 00:14:25.249 "copy": false, 00:14:25.249 "nvme_iov_md": false 00:14:25.249 }, 00:14:25.249 "driver_specific": { 00:14:25.249 "raid": { 00:14:25.249 "uuid": "87c18f8a-5cd6-4d50-bbd3-4e0cdadc3aff", 00:14:25.249 "strip_size_kb": 64, 00:14:25.249 "state": "online", 00:14:25.249 "raid_level": "raid5f", 00:14:25.249 "superblock": true, 00:14:25.249 "num_base_bdevs": 4, 00:14:25.249 "num_base_bdevs_discovered": 4, 00:14:25.249 "num_base_bdevs_operational": 4, 00:14:25.249 "base_bdevs_list": [ 00:14:25.249 { 00:14:25.249 "name": "NewBaseBdev", 00:14:25.249 "uuid": "0004a49d-5e8e-4957-b957-86254bc1a0cd", 00:14:25.249 "is_configured": true, 00:14:25.249 "data_offset": 2048, 00:14:25.249 "data_size": 63488 00:14:25.249 }, 00:14:25.249 { 00:14:25.249 "name": "BaseBdev2", 00:14:25.249 "uuid": "a654fbeb-fd50-466b-901d-7b58ad890728", 00:14:25.249 "is_configured": true, 00:14:25.249 "data_offset": 2048, 00:14:25.249 "data_size": 63488 00:14:25.249 }, 00:14:25.249 { 00:14:25.249 "name": "BaseBdev3", 00:14:25.249 "uuid": "d36a82c3-f33b-4fa6-87dc-c5011f902cd4", 00:14:25.249 "is_configured": true, 00:14:25.249 "data_offset": 2048, 00:14:25.249 "data_size": 63488 00:14:25.249 }, 00:14:25.249 { 00:14:25.249 "name": "BaseBdev4", 00:14:25.249 "uuid": "9d365ad7-4cdc-4d8e-8f25-3ef1b6db4441", 00:14:25.249 "is_configured": true, 00:14:25.249 "data_offset": 2048, 00:14:25.249 "data_size": 63488 00:14:25.249 } 00:14:25.249 ] 00:14:25.249 } 00:14:25.249 } 00:14:25.249 }' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:25.249 BaseBdev2 00:14:25.249 BaseBdev3 00:14:25.249 BaseBdev4' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.249 16:27:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.249 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 [2024-11-28 16:27:17.166886] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.507 [2024-11-28 16:27:17.166912] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.507 [2024-11-28 16:27:17.166977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.507 [2024-11-28 16:27:17.167256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.507 [2024-11-28 16:27:17.167267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93909 00:14:25.507 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93909 ']' 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93909 00:14:25.508 16:27:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93909 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93909' 00:14:25.508 killing process with pid 93909 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93909 00:14:25.508 [2024-11-28 16:27:17.208093] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.508 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93909 00:14:25.766 [2024-11-28 16:27:17.285652] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.025 16:27:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:26.025 00:14:26.025 real 0m10.025s 00:14:26.025 user 0m16.782s 00:14:26.025 sys 0m2.256s 00:14:26.025 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.025 16:27:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.025 ************************************ 00:14:26.025 END TEST raid5f_state_function_test_sb 00:14:26.025 ************************************ 00:14:26.025 16:27:17 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:26.025 16:27:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:26.025 
16:27:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.025 16:27:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.025 ************************************ 00:14:26.025 START TEST raid5f_superblock_test 00:14:26.025 ************************************ 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94564 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94564 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94564 ']' 00:14:26.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.025 16:27:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.284 [2024-11-28 16:27:17.842320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:26.284 [2024-11-28 16:27:17.842554] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94564 ] 00:14:26.284 [2024-11-28 16:27:18.006330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.543 [2024-11-28 16:27:18.078332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.543 [2024-11-28 16:27:18.154169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.543 [2024-11-28 16:27:18.154284] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.125 malloc1 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.125 [2024-11-28 16:27:18.688340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.125 [2024-11-28 16:27:18.688491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.125 [2024-11-28 16:27:18.688539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.125 [2024-11-28 16:27:18.688583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.125 [2024-11-28 16:27:18.690913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.125 [2024-11-28 16:27:18.690990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.125 pt1 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.125 malloc2 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.125 [2024-11-28 16:27:18.740034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.125 [2024-11-28 16:27:18.740240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.125 [2024-11-28 16:27:18.740287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.125 [2024-11-28 16:27:18.740315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.125 [2024-11-28 16:27:18.745311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.125 [2024-11-28 16:27:18.745365] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.125 pt2 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.125 malloc3 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.125 [2024-11-28 16:27:18.776233] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:27.125 [2024-11-28 16:27:18.776331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.125 [2024-11-28 16:27:18.776366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:27.125 [2024-11-28 16:27:18.776394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.125 [2024-11-28 16:27:18.778710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.125 [2024-11-28 16:27:18.778779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:27.125 pt3 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:27.125 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.126 16:27:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 malloc4 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 [2024-11-28 16:27:18.814588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:27.126 [2024-11-28 16:27:18.814679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.126 [2024-11-28 16:27:18.814709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:27.126 [2024-11-28 16:27:18.814741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.126 [2024-11-28 16:27:18.817081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.126 [2024-11-28 16:27:18.817150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:27.126 pt4 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.126 [2024-11-28 16:27:18.826656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.126 [2024-11-28 16:27:18.828778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.126 [2024-11-28 16:27:18.828882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:27.126 [2024-11-28 16:27:18.828965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:27.126 [2024-11-28 16:27:18.829172] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:27.126 [2024-11-28 16:27:18.829218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:27.126 [2024-11-28 16:27:18.829485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:27.126 [2024-11-28 16:27:18.829986] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:27.126 [2024-11-28 16:27:18.830028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:27.126 [2024-11-28 16:27:18.830177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.126 
16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.126 "name": "raid_bdev1", 00:14:27.126 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:27.126 "strip_size_kb": 64, 00:14:27.126 "state": "online", 00:14:27.126 "raid_level": "raid5f", 00:14:27.126 "superblock": true, 00:14:27.126 "num_base_bdevs": 4, 00:14:27.126 "num_base_bdevs_discovered": 4, 00:14:27.126 "num_base_bdevs_operational": 4, 00:14:27.126 "base_bdevs_list": [ 00:14:27.126 { 00:14:27.126 "name": "pt1", 00:14:27.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.126 "is_configured": true, 00:14:27.126 "data_offset": 2048, 00:14:27.126 "data_size": 63488 00:14:27.126 }, 00:14:27.126 { 00:14:27.126 "name": "pt2", 00:14:27.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.126 "is_configured": true, 00:14:27.126 "data_offset": 2048, 00:14:27.126 
"data_size": 63488 00:14:27.126 }, 00:14:27.126 { 00:14:27.126 "name": "pt3", 00:14:27.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.126 "is_configured": true, 00:14:27.126 "data_offset": 2048, 00:14:27.126 "data_size": 63488 00:14:27.126 }, 00:14:27.126 { 00:14:27.126 "name": "pt4", 00:14:27.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.126 "is_configured": true, 00:14:27.126 "data_offset": 2048, 00:14:27.126 "data_size": 63488 00:14:27.126 } 00:14:27.126 ] 00:14:27.126 }' 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.126 16:27:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.731 [2024-11-28 16:27:19.284503] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.731 "name": "raid_bdev1", 00:14:27.731 "aliases": [ 00:14:27.731 "eadc0186-8c25-4538-a72a-58322c59b276" 00:14:27.731 ], 00:14:27.731 "product_name": "Raid Volume", 00:14:27.731 "block_size": 512, 00:14:27.731 "num_blocks": 190464, 00:14:27.731 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:27.731 "assigned_rate_limits": { 00:14:27.731 "rw_ios_per_sec": 0, 00:14:27.731 "rw_mbytes_per_sec": 0, 00:14:27.731 "r_mbytes_per_sec": 0, 00:14:27.731 "w_mbytes_per_sec": 0 00:14:27.731 }, 00:14:27.731 "claimed": false, 00:14:27.731 "zoned": false, 00:14:27.731 "supported_io_types": { 00:14:27.731 "read": true, 00:14:27.731 "write": true, 00:14:27.731 "unmap": false, 00:14:27.731 "flush": false, 00:14:27.731 "reset": true, 00:14:27.731 "nvme_admin": false, 00:14:27.731 "nvme_io": false, 00:14:27.731 "nvme_io_md": false, 00:14:27.731 "write_zeroes": true, 00:14:27.731 "zcopy": false, 00:14:27.731 "get_zone_info": false, 00:14:27.731 "zone_management": false, 00:14:27.731 "zone_append": false, 00:14:27.731 "compare": false, 00:14:27.731 "compare_and_write": false, 00:14:27.731 "abort": false, 00:14:27.731 "seek_hole": false, 00:14:27.731 "seek_data": false, 00:14:27.731 "copy": false, 00:14:27.731 "nvme_iov_md": false 00:14:27.731 }, 00:14:27.731 "driver_specific": { 00:14:27.731 "raid": { 00:14:27.731 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:27.731 "strip_size_kb": 64, 00:14:27.731 "state": "online", 00:14:27.731 "raid_level": "raid5f", 00:14:27.731 "superblock": true, 00:14:27.731 "num_base_bdevs": 4, 00:14:27.731 "num_base_bdevs_discovered": 4, 00:14:27.731 "num_base_bdevs_operational": 4, 00:14:27.731 "base_bdevs_list": [ 00:14:27.731 { 00:14:27.731 "name": "pt1", 00:14:27.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.731 "is_configured": true, 00:14:27.731 "data_offset": 2048, 
00:14:27.731 "data_size": 63488 00:14:27.731 }, 00:14:27.731 { 00:14:27.731 "name": "pt2", 00:14:27.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.731 "is_configured": true, 00:14:27.731 "data_offset": 2048, 00:14:27.731 "data_size": 63488 00:14:27.731 }, 00:14:27.731 { 00:14:27.731 "name": "pt3", 00:14:27.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.731 "is_configured": true, 00:14:27.731 "data_offset": 2048, 00:14:27.731 "data_size": 63488 00:14:27.731 }, 00:14:27.731 { 00:14:27.731 "name": "pt4", 00:14:27.731 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.731 "is_configured": true, 00:14:27.731 "data_offset": 2048, 00:14:27.731 "data_size": 63488 00:14:27.731 } 00:14:27.731 ] 00:14:27.731 } 00:14:27.731 } 00:14:27.731 }' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:27.731 pt2 00:14:27.731 pt3 00:14:27.731 pt4' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.731 16:27:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.731 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.732 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 [2024-11-28 16:27:19.608128] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eadc0186-8c25-4538-a72a-58322c59b276 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
eadc0186-8c25-4538-a72a-58322c59b276 ']' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 [2024-11-28 16:27:19.651861] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.991 [2024-11-28 16:27:19.651896] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.991 [2024-11-28 16:27:19.651981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.991 [2024-11-28 16:27:19.652060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.991 [2024-11-28 16:27:19.652070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.991 
16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.991 16:27:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.991 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:28.251 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.251 [2024-11-28 16:27:19.815632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:28.252 [2024-11-28 16:27:19.817797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:28.252 [2024-11-28 16:27:19.817857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:28.252 [2024-11-28 16:27:19.817887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:28.252 [2024-11-28 16:27:19.817931] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:28.252 [2024-11-28 16:27:19.817968] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:28.252 [2024-11-28 16:27:19.817988] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:28.252 [2024-11-28 16:27:19.818003] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:28.252 [2024-11-28 16:27:19.818017] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.252 [2024-11-28 16:27:19.818027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:28.252 request: 00:14:28.252 { 00:14:28.252 "name": "raid_bdev1", 00:14:28.252 "raid_level": "raid5f", 00:14:28.252 "base_bdevs": [ 00:14:28.252 "malloc1", 00:14:28.252 "malloc2", 00:14:28.252 "malloc3", 00:14:28.252 "malloc4" 00:14:28.252 ], 00:14:28.252 "strip_size_kb": 64, 00:14:28.252 "superblock": false, 00:14:28.252 "method": "bdev_raid_create", 00:14:28.252 "req_id": 1 00:14:28.252 } 00:14:28.252 Got JSON-RPC error response 
00:14:28.252 response: 00:14:28.252 { 00:14:28.252 "code": -17, 00:14:28.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:28.252 } 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.252 [2024-11-28 16:27:19.883457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:28.252 [2024-11-28 16:27:19.883536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:28.252 [2024-11-28 16:27:19.883571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:28.252 [2024-11-28 16:27:19.883615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.252 [2024-11-28 16:27:19.885932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.252 [2024-11-28 16:27:19.885993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:28.252 [2024-11-28 16:27:19.886074] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:28.252 [2024-11-28 16:27:19.886141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.252 pt1 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.252 "name": "raid_bdev1", 00:14:28.252 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:28.252 "strip_size_kb": 64, 00:14:28.252 "state": "configuring", 00:14:28.252 "raid_level": "raid5f", 00:14:28.252 "superblock": true, 00:14:28.252 "num_base_bdevs": 4, 00:14:28.252 "num_base_bdevs_discovered": 1, 00:14:28.252 "num_base_bdevs_operational": 4, 00:14:28.252 "base_bdevs_list": [ 00:14:28.252 { 00:14:28.252 "name": "pt1", 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.252 "is_configured": true, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "name": null, 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.252 "is_configured": false, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "name": null, 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.252 "is_configured": false, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "name": null, 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.252 "is_configured": false, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 } 00:14:28.252 ] 00:14:28.252 }' 
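The `verify_raid_bdev_state` helper above fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all` and narrows it with jq before checking the expected state. A minimal standalone sketch of that jq filtering follows; the inline JSON is a hypothetical stand-in for live RPC output, not captured from a running SPDK target:

```shell
# Sketch of the jq filtering verify_raid_bdev_state performs: select the
# array entry by name, then read its state and base-bdev counts. The
# sample document below is hypothetical, mirroring the dump in the log.
tmp='[{"name":"raid_bdev1","state":"configuring",
       "num_base_bdevs_discovered":1,"num_base_bdevs_operational":4}]'
info=$(echo "$tmp" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
echo "$state/$discovered"   # prints: configuring/1
```

Selecting by name first, then reading individual fields, is what lets the harness assert `configuring` with 1 of 4 base bdevs discovered right after only `pt1` has been re-created.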
00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.252 16:27:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.821 [2024-11-28 16:27:20.322694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.821 [2024-11-28 16:27:20.322777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.821 [2024-11-28 16:27:20.322797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:28.821 [2024-11-28 16:27:20.322805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.821 [2024-11-28 16:27:20.323159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.821 [2024-11-28 16:27:20.323175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.821 [2024-11-28 16:27:20.323228] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.821 [2024-11-28 16:27:20.323248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.821 pt2 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.821 [2024-11-28 16:27:20.330699] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:28.821 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.822 "name": "raid_bdev1", 00:14:28.822 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:28.822 "strip_size_kb": 64, 00:14:28.822 "state": "configuring", 00:14:28.822 "raid_level": "raid5f", 00:14:28.822 "superblock": true, 00:14:28.822 "num_base_bdevs": 4, 00:14:28.822 "num_base_bdevs_discovered": 1, 00:14:28.822 "num_base_bdevs_operational": 4, 00:14:28.822 "base_bdevs_list": [ 00:14:28.822 { 00:14:28.822 "name": "pt1", 00:14:28.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.822 "is_configured": true, 00:14:28.822 "data_offset": 2048, 00:14:28.822 "data_size": 63488 00:14:28.822 }, 00:14:28.822 { 00:14:28.822 "name": null, 00:14:28.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.822 "is_configured": false, 00:14:28.822 "data_offset": 0, 00:14:28.822 "data_size": 63488 00:14:28.822 }, 00:14:28.822 { 00:14:28.822 "name": null, 00:14:28.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.822 "is_configured": false, 00:14:28.822 "data_offset": 2048, 00:14:28.822 "data_size": 63488 00:14:28.822 }, 00:14:28.822 { 00:14:28.822 "name": null, 00:14:28.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.822 "is_configured": false, 00:14:28.822 "data_offset": 2048, 00:14:28.822 "data_size": 63488 00:14:28.822 } 00:14:28.822 ] 00:14:28.822 }' 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.822 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
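The `(( i ))` loop above (bdev_raid.sh@478-479) re-creates the passthru bdevs pt2..pt4 on top of their malloc bases, one `bdev_passthru_create` per index. A hedged sketch of the same loop shape, with `rpc_cmd` stubbed out since no SPDK target is available here:

```shell
# Sketch of the @478-479 loop shape: i starts at 1 (pt1 was re-created
# separately), so each iteration targets base bdev i+1. rpc_cmd is a stub
# standing in for the real SPDK JSON-RPC helper.
rpc_cmd() { echo "rpc: $*"; }   # stub, not the real autotest helper
num_base_bdevs=4
created=()
for (( i = 1; i < num_base_bdevs; i++ )); do
  n=$(( i + 1 ))   # pt2, pt3, pt4
  rpc_cmd bdev_passthru_create -b "malloc$n" -p "pt$n" \
    -u "00000000-0000-0000-0000-00000000000$n"
  created+=("pt$n")
done
echo "${created[*]}"   # prints: pt2 pt3 pt4
```

Against a live target the stubbed `rpc_cmd` would issue the JSON-RPC call, and the superblocks found on each malloc base are what let the raid bdev reassemble as the loop completes.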
00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.081 [2024-11-28 16:27:20.805892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.081 [2024-11-28 16:27:20.805985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.081 [2024-11-28 16:27:20.806014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:29.081 [2024-11-28 16:27:20.806040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.081 [2024-11-28 16:27:20.806408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.081 [2024-11-28 16:27:20.806463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.081 [2024-11-28 16:27:20.806540] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:29.081 [2024-11-28 16:27:20.806590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.081 pt2 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.081 [2024-11-28 16:27:20.817826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:29.081 [2024-11-28 16:27:20.817889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.081 [2024-11-28 16:27:20.817905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:29.081 [2024-11-28 16:27:20.817915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.081 [2024-11-28 16:27:20.818234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.081 [2024-11-28 16:27:20.818253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.081 [2024-11-28 16:27:20.818298] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:29.081 [2024-11-28 16:27:20.818317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.081 pt3 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.081 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.081 [2024-11-28 16:27:20.829813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:29.081 [2024-11-28 16:27:20.829915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.081 [2024-11-28 16:27:20.829934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:29.081 [2024-11-28 16:27:20.829944] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.081 [2024-11-28 16:27:20.830229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.081 [2024-11-28 16:27:20.830247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:29.082 [2024-11-28 16:27:20.830290] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:29.082 [2024-11-28 16:27:20.830308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:29.082 [2024-11-28 16:27:20.830402] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:29.082 [2024-11-28 16:27:20.830414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.082 [2024-11-28 16:27:20.830642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.082 [2024-11-28 16:27:20.831121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:29.082 [2024-11-28 16:27:20.831138] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:29.082 [2024-11-28 16:27:20.831231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.082 pt4 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.082 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.341 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.341 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.341 "name": "raid_bdev1", 00:14:29.341 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:29.341 "strip_size_kb": 64, 00:14:29.341 "state": "online", 00:14:29.341 "raid_level": "raid5f", 00:14:29.341 "superblock": true, 00:14:29.341 "num_base_bdevs": 4, 00:14:29.341 "num_base_bdevs_discovered": 4, 00:14:29.341 "num_base_bdevs_operational": 4, 00:14:29.341 "base_bdevs_list": [ 00:14:29.341 { 00:14:29.341 "name": "pt1", 00:14:29.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.341 "is_configured": true, 00:14:29.341 
"data_offset": 2048, 00:14:29.341 "data_size": 63488 00:14:29.341 }, 00:14:29.341 { 00:14:29.341 "name": "pt2", 00:14:29.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.341 "is_configured": true, 00:14:29.341 "data_offset": 2048, 00:14:29.341 "data_size": 63488 00:14:29.341 }, 00:14:29.341 { 00:14:29.341 "name": "pt3", 00:14:29.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.341 "is_configured": true, 00:14:29.341 "data_offset": 2048, 00:14:29.341 "data_size": 63488 00:14:29.341 }, 00:14:29.341 { 00:14:29.341 "name": "pt4", 00:14:29.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.341 "is_configured": true, 00:14:29.341 "data_offset": 2048, 00:14:29.341 "data_size": 63488 00:14:29.341 } 00:14:29.341 ] 00:14:29.341 }' 00:14:29.341 16:27:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.341 16:27:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.602 16:27:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.602 [2024-11-28 16:27:21.273344] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.602 "name": "raid_bdev1", 00:14:29.602 "aliases": [ 00:14:29.602 "eadc0186-8c25-4538-a72a-58322c59b276" 00:14:29.602 ], 00:14:29.602 "product_name": "Raid Volume", 00:14:29.602 "block_size": 512, 00:14:29.602 "num_blocks": 190464, 00:14:29.602 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:29.602 "assigned_rate_limits": { 00:14:29.602 "rw_ios_per_sec": 0, 00:14:29.602 "rw_mbytes_per_sec": 0, 00:14:29.602 "r_mbytes_per_sec": 0, 00:14:29.602 "w_mbytes_per_sec": 0 00:14:29.602 }, 00:14:29.602 "claimed": false, 00:14:29.602 "zoned": false, 00:14:29.602 "supported_io_types": { 00:14:29.602 "read": true, 00:14:29.602 "write": true, 00:14:29.602 "unmap": false, 00:14:29.602 "flush": false, 00:14:29.602 "reset": true, 00:14:29.602 "nvme_admin": false, 00:14:29.602 "nvme_io": false, 00:14:29.602 "nvme_io_md": false, 00:14:29.602 "write_zeroes": true, 00:14:29.602 "zcopy": false, 00:14:29.602 "get_zone_info": false, 00:14:29.602 "zone_management": false, 00:14:29.602 "zone_append": false, 00:14:29.602 "compare": false, 00:14:29.602 "compare_and_write": false, 00:14:29.602 "abort": false, 00:14:29.602 "seek_hole": false, 00:14:29.602 "seek_data": false, 00:14:29.602 "copy": false, 00:14:29.602 "nvme_iov_md": false 00:14:29.602 }, 00:14:29.602 "driver_specific": { 00:14:29.602 "raid": { 00:14:29.602 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:29.602 "strip_size_kb": 64, 00:14:29.602 "state": "online", 00:14:29.602 "raid_level": "raid5f", 00:14:29.602 "superblock": true, 00:14:29.602 "num_base_bdevs": 4, 00:14:29.602 "num_base_bdevs_discovered": 4, 
00:14:29.602 "num_base_bdevs_operational": 4, 00:14:29.602 "base_bdevs_list": [ 00:14:29.602 { 00:14:29.602 "name": "pt1", 00:14:29.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.602 "is_configured": true, 00:14:29.602 "data_offset": 2048, 00:14:29.602 "data_size": 63488 00:14:29.602 }, 00:14:29.602 { 00:14:29.602 "name": "pt2", 00:14:29.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.602 "is_configured": true, 00:14:29.602 "data_offset": 2048, 00:14:29.602 "data_size": 63488 00:14:29.602 }, 00:14:29.602 { 00:14:29.602 "name": "pt3", 00:14:29.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.602 "is_configured": true, 00:14:29.602 "data_offset": 2048, 00:14:29.602 "data_size": 63488 00:14:29.602 }, 00:14:29.602 { 00:14:29.602 "name": "pt4", 00:14:29.602 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.602 "is_configured": true, 00:14:29.602 "data_offset": 2048, 00:14:29.602 "data_size": 63488 00:14:29.602 } 00:14:29.602 ] 00:14:29.602 } 00:14:29.602 } 00:14:29.602 }' 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.602 pt2 00:14:29.602 pt3 00:14:29.602 pt4' 00:14:29.602 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 16:27:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 [2024-11-28 16:27:21.568803] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 
16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eadc0186-8c25-4538-a72a-58322c59b276 '!=' eadc0186-8c25-4538-a72a-58322c59b276 ']' 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 [2024-11-28 16:27:21.596628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.862 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.863 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.122 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.122 "name": "raid_bdev1", 00:14:30.122 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:30.122 "strip_size_kb": 64, 00:14:30.122 "state": "online", 00:14:30.122 "raid_level": "raid5f", 00:14:30.122 "superblock": true, 00:14:30.122 "num_base_bdevs": 4, 00:14:30.122 "num_base_bdevs_discovered": 3, 00:14:30.122 "num_base_bdevs_operational": 3, 00:14:30.122 "base_bdevs_list": [ 00:14:30.122 { 00:14:30.122 "name": null, 00:14:30.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.122 "is_configured": false, 00:14:30.122 "data_offset": 0, 00:14:30.122 "data_size": 63488 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "name": "pt2", 00:14:30.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.122 "is_configured": true, 00:14:30.122 "data_offset": 2048, 00:14:30.122 "data_size": 63488 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "name": "pt3", 00:14:30.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.122 "is_configured": true, 00:14:30.122 "data_offset": 2048, 00:14:30.122 "data_size": 63488 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "name": "pt4", 00:14:30.122 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.122 "is_configured": true, 00:14:30.122 
"data_offset": 2048, 00:14:30.122 "data_size": 63488 00:14:30.122 } 00:14:30.122 ] 00:14:30.122 }' 00:14:30.122 16:27:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.122 16:27:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.380 [2024-11-28 16:27:22.071777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.380 [2024-11-28 16:27:22.071852] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.380 [2024-11-28 16:27:22.071941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.380 [2024-11-28 16:27:22.072002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.380 [2024-11-28 16:27:22.072015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.380 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.639 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.640 [2024-11-28 16:27:22.167615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.640 [2024-11-28 16:27:22.167671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.640 [2024-11-28 16:27:22.167686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:30.640 [2024-11-28 16:27:22.167696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.640 [2024-11-28 16:27:22.170118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.640 [2024-11-28 16:27:22.170156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.640 [2024-11-28 16:27:22.170212] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.640 [2024-11-28 16:27:22.170244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.640 pt2 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.640 "name": "raid_bdev1", 00:14:30.640 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:30.640 "strip_size_kb": 64, 00:14:30.640 "state": "configuring", 00:14:30.640 "raid_level": "raid5f", 00:14:30.640 "superblock": true, 00:14:30.640 
"num_base_bdevs": 4, 00:14:30.640 "num_base_bdevs_discovered": 1, 00:14:30.640 "num_base_bdevs_operational": 3, 00:14:30.640 "base_bdevs_list": [ 00:14:30.640 { 00:14:30.640 "name": null, 00:14:30.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.640 "is_configured": false, 00:14:30.640 "data_offset": 2048, 00:14:30.640 "data_size": 63488 00:14:30.640 }, 00:14:30.640 { 00:14:30.640 "name": "pt2", 00:14:30.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.640 "is_configured": true, 00:14:30.640 "data_offset": 2048, 00:14:30.640 "data_size": 63488 00:14:30.640 }, 00:14:30.640 { 00:14:30.640 "name": null, 00:14:30.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.640 "is_configured": false, 00:14:30.640 "data_offset": 2048, 00:14:30.640 "data_size": 63488 00:14:30.640 }, 00:14:30.640 { 00:14:30.640 "name": null, 00:14:30.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.640 "is_configured": false, 00:14:30.640 "data_offset": 2048, 00:14:30.640 "data_size": 63488 00:14:30.640 } 00:14:30.640 ] 00:14:30.640 }' 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.640 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.899 [2024-11-28 16:27:22.646793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:30.899 [2024-11-28 
16:27:22.646853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.899 [2024-11-28 16:27:22.646868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:30.899 [2024-11-28 16:27:22.646880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.899 [2024-11-28 16:27:22.647217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.899 [2024-11-28 16:27:22.647241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:30.899 [2024-11-28 16:27:22.647292] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:30.899 [2024-11-28 16:27:22.647323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:30.899 pt3 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.899 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.158 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.158 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.158 "name": "raid_bdev1", 00:14:31.158 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:31.158 "strip_size_kb": 64, 00:14:31.158 "state": "configuring", 00:14:31.158 "raid_level": "raid5f", 00:14:31.158 "superblock": true, 00:14:31.158 "num_base_bdevs": 4, 00:14:31.158 "num_base_bdevs_discovered": 2, 00:14:31.158 "num_base_bdevs_operational": 3, 00:14:31.158 "base_bdevs_list": [ 00:14:31.158 { 00:14:31.158 "name": null, 00:14:31.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.158 "is_configured": false, 00:14:31.158 "data_offset": 2048, 00:14:31.158 "data_size": 63488 00:14:31.158 }, 00:14:31.158 { 00:14:31.158 "name": "pt2", 00:14:31.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.158 "is_configured": true, 00:14:31.158 "data_offset": 2048, 00:14:31.158 "data_size": 63488 00:14:31.158 }, 00:14:31.158 { 00:14:31.158 "name": "pt3", 00:14:31.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.158 "is_configured": true, 00:14:31.158 "data_offset": 2048, 00:14:31.158 "data_size": 63488 00:14:31.158 }, 00:14:31.158 { 00:14:31.158 "name": null, 00:14:31.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.158 "is_configured": false, 00:14:31.158 "data_offset": 2048, 
00:14:31.158 "data_size": 63488 00:14:31.158 } 00:14:31.158 ] 00:14:31.158 }' 00:14:31.158 16:27:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.158 16:27:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.418 [2024-11-28 16:27:23.054060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:31.418 [2024-11-28 16:27:23.054159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.418 [2024-11-28 16:27:23.054196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:31.418 [2024-11-28 16:27:23.054233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.418 [2024-11-28 16:27:23.054557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.418 [2024-11-28 16:27:23.054617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:31.418 [2024-11-28 16:27:23.054694] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:31.418 [2024-11-28 16:27:23.054744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:31.418 [2024-11-28 16:27:23.054863] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:31.418 [2024-11-28 16:27:23.054903] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:31.418 [2024-11-28 16:27:23.055166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:31.418 [2024-11-28 16:27:23.055722] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:31.418 [2024-11-28 16:27:23.055766] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:31.418 [2024-11-28 16:27:23.056046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.418 pt4 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.418 
16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.418 "name": "raid_bdev1", 00:14:31.418 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:31.418 "strip_size_kb": 64, 00:14:31.418 "state": "online", 00:14:31.418 "raid_level": "raid5f", 00:14:31.418 "superblock": true, 00:14:31.418 "num_base_bdevs": 4, 00:14:31.418 "num_base_bdevs_discovered": 3, 00:14:31.418 "num_base_bdevs_operational": 3, 00:14:31.418 "base_bdevs_list": [ 00:14:31.418 { 00:14:31.418 "name": null, 00:14:31.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.418 "is_configured": false, 00:14:31.418 "data_offset": 2048, 00:14:31.418 "data_size": 63488 00:14:31.418 }, 00:14:31.418 { 00:14:31.418 "name": "pt2", 00:14:31.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.418 "is_configured": true, 00:14:31.418 "data_offset": 2048, 00:14:31.418 "data_size": 63488 00:14:31.418 }, 00:14:31.418 { 00:14:31.418 "name": "pt3", 00:14:31.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.418 "is_configured": true, 00:14:31.418 "data_offset": 2048, 00:14:31.418 "data_size": 63488 00:14:31.418 }, 00:14:31.418 { 00:14:31.418 "name": "pt4", 00:14:31.418 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.418 "is_configured": true, 00:14:31.418 "data_offset": 2048, 00:14:31.418 "data_size": 63488 00:14:31.418 } 00:14:31.418 ] 00:14:31.418 }' 00:14:31.418 16:27:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.418 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 [2024-11-28 16:27:23.505823] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.988 [2024-11-28 16:27:23.505897] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.988 [2024-11-28 16:27:23.505952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.988 [2024-11-28 16:27:23.506015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.988 [2024-11-28 16:27:23.506024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 [2024-11-28 16:27:23.561773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.988 [2024-11-28 16:27:23.561866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.988 [2024-11-28 16:27:23.561919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:31.988 [2024-11-28 16:27:23.561948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.988 [2024-11-28 16:27:23.564383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.988 [2024-11-28 16:27:23.564454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.988 [2024-11-28 16:27:23.564530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.988 [2024-11-28 16:27:23.564590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.988 
[2024-11-28 16:27:23.564714] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:31.988 [2024-11-28 16:27:23.564763] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.988 [2024-11-28 16:27:23.564825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:31.988 [2024-11-28 16:27:23.564917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.988 [2024-11-28 16:27:23.565062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:31.988 pt1 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.988 "name": "raid_bdev1", 00:14:31.988 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:31.988 "strip_size_kb": 64, 00:14:31.988 "state": "configuring", 00:14:31.988 "raid_level": "raid5f", 00:14:31.988 "superblock": true, 00:14:31.988 "num_base_bdevs": 4, 00:14:31.988 "num_base_bdevs_discovered": 2, 00:14:31.988 "num_base_bdevs_operational": 3, 00:14:31.988 "base_bdevs_list": [ 00:14:31.988 { 00:14:31.988 "name": null, 00:14:31.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.988 "is_configured": false, 00:14:31.988 "data_offset": 2048, 00:14:31.988 "data_size": 63488 00:14:31.988 }, 00:14:31.988 { 00:14:31.988 "name": "pt2", 00:14:31.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.988 "is_configured": true, 00:14:31.988 "data_offset": 2048, 00:14:31.988 "data_size": 63488 00:14:31.988 }, 00:14:31.988 { 00:14:31.988 "name": "pt3", 00:14:31.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.988 "is_configured": true, 00:14:31.988 "data_offset": 2048, 00:14:31.988 "data_size": 63488 00:14:31.988 }, 00:14:31.988 { 00:14:31.988 "name": null, 00:14:31.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.988 "is_configured": false, 00:14:31.988 "data_offset": 2048, 00:14:31.988 "data_size": 63488 00:14:31.988 } 00:14:31.988 ] 
00:14:31.988 }' 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.988 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.247 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:32.247 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.247 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.247 16:27:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:32.247 16:27:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.506 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:32.506 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:32.506 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.506 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.506 [2024-11-28 16:27:24.036920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:32.506 [2024-11-28 16:27:24.036971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.506 [2024-11-28 16:27:24.036986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:32.506 [2024-11-28 16:27:24.036997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.507 [2024-11-28 16:27:24.037366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.507 [2024-11-28 16:27:24.037386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:32.507 [2024-11-28 16:27:24.037437] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:32.507 [2024-11-28 16:27:24.037458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:32.507 [2024-11-28 16:27:24.037540] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:32.507 [2024-11-28 16:27:24.037553] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:32.507 [2024-11-28 16:27:24.037775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:32.507 [2024-11-28 16:27:24.038316] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:32.507 [2024-11-28 16:27:24.038335] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:32.507 [2024-11-28 16:27:24.038497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.507 pt4 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.507 16:27:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.507 "name": "raid_bdev1", 00:14:32.507 "uuid": "eadc0186-8c25-4538-a72a-58322c59b276", 00:14:32.507 "strip_size_kb": 64, 00:14:32.507 "state": "online", 00:14:32.507 "raid_level": "raid5f", 00:14:32.507 "superblock": true, 00:14:32.507 "num_base_bdevs": 4, 00:14:32.507 "num_base_bdevs_discovered": 3, 00:14:32.507 "num_base_bdevs_operational": 3, 00:14:32.507 "base_bdevs_list": [ 00:14:32.507 { 00:14:32.507 "name": null, 00:14:32.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.507 "is_configured": false, 00:14:32.507 "data_offset": 2048, 00:14:32.507 "data_size": 63488 00:14:32.507 }, 00:14:32.507 { 00:14:32.507 "name": "pt2", 00:14:32.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.507 "is_configured": true, 00:14:32.507 "data_offset": 2048, 00:14:32.507 "data_size": 63488 00:14:32.507 }, 00:14:32.507 { 00:14:32.507 "name": "pt3", 00:14:32.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.507 "is_configured": true, 00:14:32.507 "data_offset": 2048, 00:14:32.507 "data_size": 63488 
00:14:32.507 }, 00:14:32.507 { 00:14:32.507 "name": "pt4", 00:14:32.507 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:32.507 "is_configured": true, 00:14:32.507 "data_offset": 2048, 00:14:32.507 "data_size": 63488 00:14:32.507 } 00:14:32.507 ] 00:14:32.507 }' 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.507 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:32.766 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:32.767 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.767 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.767 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:32.767 [2024-11-28 16:27:24.504316] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.767 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eadc0186-8c25-4538-a72a-58322c59b276 '!=' eadc0186-8c25-4538-a72a-58322c59b276 ']' 00:14:33.026 16:27:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94564 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94564 ']' 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94564 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94564 00:14:33.026 killing process with pid 94564 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94564' 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94564 00:14:33.026 [2024-11-28 16:27:24.585750] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.026 [2024-11-28 16:27:24.585811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.026 [2024-11-28 16:27:24.585887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.026 [2024-11-28 16:27:24.585897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:33.026 16:27:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94564 00:14:33.026 [2024-11-28 16:27:24.666346] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.286 16:27:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:33.286 
00:14:33.286 real 0m7.297s 00:14:33.286 user 0m12.011s 00:14:33.286 sys 0m1.666s 00:14:33.286 16:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.286 ************************************ 00:14:33.286 END TEST raid5f_superblock_test 00:14:33.286 ************************************ 00:14:33.286 16:27:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.545 16:27:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:33.545 16:27:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:33.545 16:27:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:33.545 16:27:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.545 16:27:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.545 ************************************ 00:14:33.545 START TEST raid5f_rebuild_test 00:14:33.545 ************************************ 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:33.545 16:27:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95033 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95033 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95033 ']' 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.545 16:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.545 [2024-11-28 16:27:25.220316] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:33.545 [2024-11-28 16:27:25.220511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95033 ] 00:14:33.545 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.545 Zero copy mechanism will not be used. 00:14:33.804 [2024-11-28 16:27:25.380483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.804 [2024-11-28 16:27:25.447724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.804 [2024-11-28 16:27:25.523036] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.804 [2024-11-28 16:27:25.523152] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.372 BaseBdev1_malloc 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:34.372 [2024-11-28 16:27:26.064702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.372 [2024-11-28 16:27:26.064852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.372 [2024-11-28 16:27:26.064881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.372 [2024-11-28 16:27:26.064896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.372 [2024-11-28 16:27:26.067255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.372 [2024-11-28 16:27:26.067289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.372 BaseBdev1 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.372 BaseBdev2_malloc 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.372 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.373 [2024-11-28 16:27:26.112121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:34.373 [2024-11-28 16:27:26.112215] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.373 [2024-11-28 16:27:26.112254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.373 [2024-11-28 16:27:26.112272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.373 [2024-11-28 16:27:26.116818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.373 [2024-11-28 16:27:26.116902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.373 BaseBdev2 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.373 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.633 BaseBdev3_malloc 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.633 [2024-11-28 16:27:26.149136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:34.633 [2024-11-28 16:27:26.149181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.633 [2024-11-28 16:27:26.149204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:34.633 
[2024-11-28 16:27:26.149213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.633 [2024-11-28 16:27:26.151409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.633 [2024-11-28 16:27:26.151441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:34.633 BaseBdev3 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.633 BaseBdev4_malloc 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.633 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.633 [2024-11-28 16:27:26.183465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:34.633 [2024-11-28 16:27:26.183591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.633 [2024-11-28 16:27:26.183621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:34.633 [2024-11-28 16:27:26.183630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.633 [2024-11-28 16:27:26.185822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:34.633 [2024-11-28 16:27:26.185861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:34.633 BaseBdev4 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.634 spare_malloc 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.634 spare_delay 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.634 [2024-11-28 16:27:26.229691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.634 [2024-11-28 16:27:26.229740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.634 [2024-11-28 16:27:26.229761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:34.634 [2024-11-28 16:27:26.229769] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.634 [2024-11-28 16:27:26.232021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.634 [2024-11-28 16:27:26.232054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.634 spare 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.634 [2024-11-28 16:27:26.241765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.634 [2024-11-28 16:27:26.243825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.634 [2024-11-28 16:27:26.243906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.634 [2024-11-28 16:27:26.243944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.634 [2024-11-28 16:27:26.244023] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:34.634 [2024-11-28 16:27:26.244037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:34.634 [2024-11-28 16:27:26.244266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:34.634 [2024-11-28 16:27:26.244702] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:34.634 [2024-11-28 16:27:26.244715] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:34.634 [2024-11-28 
16:27:26.244825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.634 "name": "raid_bdev1", 00:14:34.634 "uuid": 
"0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:34.634 "strip_size_kb": 64, 00:14:34.634 "state": "online", 00:14:34.634 "raid_level": "raid5f", 00:14:34.634 "superblock": false, 00:14:34.634 "num_base_bdevs": 4, 00:14:34.634 "num_base_bdevs_discovered": 4, 00:14:34.634 "num_base_bdevs_operational": 4, 00:14:34.634 "base_bdevs_list": [ 00:14:34.634 { 00:14:34.634 "name": "BaseBdev1", 00:14:34.634 "uuid": "8949285b-5891-58d6-a8c6-6983a977e48a", 00:14:34.634 "is_configured": true, 00:14:34.634 "data_offset": 0, 00:14:34.634 "data_size": 65536 00:14:34.634 }, 00:14:34.634 { 00:14:34.634 "name": "BaseBdev2", 00:14:34.634 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:34.634 "is_configured": true, 00:14:34.634 "data_offset": 0, 00:14:34.634 "data_size": 65536 00:14:34.634 }, 00:14:34.634 { 00:14:34.634 "name": "BaseBdev3", 00:14:34.634 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:34.634 "is_configured": true, 00:14:34.634 "data_offset": 0, 00:14:34.634 "data_size": 65536 00:14:34.634 }, 00:14:34.634 { 00:14:34.634 "name": "BaseBdev4", 00:14:34.634 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:34.634 "is_configured": true, 00:14:34.634 "data_offset": 0, 00:14:34.634 "data_size": 65536 00:14:34.634 } 00:14:34.634 ] 00:14:34.634 }' 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.634 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.204 [2024-11-28 16:27:26.679115] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.204 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:35.204 [2024-11-28 16:27:26.942550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:35.204 /dev/nbd0 00:14:35.463 16:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.463 1+0 records in 00:14:35.463 1+0 records out 00:14:35.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624551 s, 6.6 MB/s 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.463 16:27:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:35.463 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:36.032 512+0 records in 00:14:36.032 512+0 records out 00:14:36.032 100663296 bytes (101 MB, 96 MiB) copied, 0.669222 s, 150 MB/s 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.032 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.293 [2024-11-28 16:27:27.914019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.293 [2024-11-28 16:27:27.923787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.293 "name": "raid_bdev1", 00:14:36.293 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:36.293 "strip_size_kb": 64, 00:14:36.293 "state": "online", 00:14:36.293 "raid_level": "raid5f", 00:14:36.293 "superblock": false, 00:14:36.293 "num_base_bdevs": 4, 00:14:36.293 "num_base_bdevs_discovered": 3, 00:14:36.293 "num_base_bdevs_operational": 3, 00:14:36.293 "base_bdevs_list": [ 00:14:36.293 { 00:14:36.293 "name": null, 00:14:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.293 "is_configured": false, 00:14:36.293 "data_offset": 0, 00:14:36.293 "data_size": 65536 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev2", 00:14:36.293 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:36.293 "is_configured": true, 00:14:36.293 
"data_offset": 0, 00:14:36.293 "data_size": 65536 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev3", 00:14:36.293 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 0, 00:14:36.293 "data_size": 65536 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev4", 00:14:36.293 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 0, 00:14:36.293 "data_size": 65536 00:14:36.293 } 00:14:36.293 ] 00:14:36.293 }' 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.293 16:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.864 16:27:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.864 16:27:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.864 16:27:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.864 [2024-11-28 16:27:28.390974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.864 [2024-11-28 16:27:28.394600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:36.864 [2024-11-28 16:27:28.396763] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.864 16:27:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.864 16:27:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.804 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.804 "name": "raid_bdev1", 00:14:37.804 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:37.804 "strip_size_kb": 64, 00:14:37.804 "state": "online", 00:14:37.804 "raid_level": "raid5f", 00:14:37.804 "superblock": false, 00:14:37.804 "num_base_bdevs": 4, 00:14:37.804 "num_base_bdevs_discovered": 4, 00:14:37.804 "num_base_bdevs_operational": 4, 00:14:37.804 "process": { 00:14:37.804 "type": "rebuild", 00:14:37.804 "target": "spare", 00:14:37.804 "progress": { 00:14:37.804 "blocks": 19200, 00:14:37.805 "percent": 9 00:14:37.805 } 00:14:37.805 }, 00:14:37.805 "base_bdevs_list": [ 00:14:37.805 { 00:14:37.805 "name": "spare", 00:14:37.805 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:37.805 "is_configured": true, 00:14:37.805 "data_offset": 0, 00:14:37.805 "data_size": 65536 00:14:37.805 }, 00:14:37.805 { 00:14:37.805 "name": "BaseBdev2", 00:14:37.805 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:37.805 "is_configured": true, 00:14:37.805 "data_offset": 0, 00:14:37.805 "data_size": 65536 00:14:37.805 }, 00:14:37.805 { 00:14:37.805 "name": "BaseBdev3", 00:14:37.805 "uuid": 
"4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:37.805 "is_configured": true, 00:14:37.805 "data_offset": 0, 00:14:37.805 "data_size": 65536 00:14:37.805 }, 00:14:37.805 { 00:14:37.805 "name": "BaseBdev4", 00:14:37.805 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:37.805 "is_configured": true, 00:14:37.805 "data_offset": 0, 00:14:37.805 "data_size": 65536 00:14:37.805 } 00:14:37.805 ] 00:14:37.805 }' 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.805 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.805 [2024-11-28 16:27:29.563684] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.065 [2024-11-28 16:27:29.602196] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.065 [2024-11-28 16:27:29.602250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.065 [2024-11-28 16:27:29.602285] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.065 [2024-11-28 16:27:29.602293] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.065 "name": "raid_bdev1", 00:14:38.065 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:38.065 "strip_size_kb": 64, 00:14:38.065 "state": "online", 00:14:38.065 "raid_level": "raid5f", 00:14:38.065 "superblock": false, 00:14:38.065 "num_base_bdevs": 4, 00:14:38.065 "num_base_bdevs_discovered": 3, 00:14:38.065 
"num_base_bdevs_operational": 3, 00:14:38.065 "base_bdevs_list": [ 00:14:38.065 { 00:14:38.065 "name": null, 00:14:38.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.065 "is_configured": false, 00:14:38.065 "data_offset": 0, 00:14:38.065 "data_size": 65536 00:14:38.065 }, 00:14:38.065 { 00:14:38.065 "name": "BaseBdev2", 00:14:38.065 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:38.065 "is_configured": true, 00:14:38.065 "data_offset": 0, 00:14:38.065 "data_size": 65536 00:14:38.065 }, 00:14:38.065 { 00:14:38.065 "name": "BaseBdev3", 00:14:38.065 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:38.065 "is_configured": true, 00:14:38.065 "data_offset": 0, 00:14:38.065 "data_size": 65536 00:14:38.065 }, 00:14:38.065 { 00:14:38.065 "name": "BaseBdev4", 00:14:38.065 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:38.065 "is_configured": true, 00:14:38.065 "data_offset": 0, 00:14:38.065 "data_size": 65536 00:14:38.065 } 00:14:38.065 ] 00:14:38.065 }' 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.065 16:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.325 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.325 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.325 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.325 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.325 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.584 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.584 16:27:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.584 16:27:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.585 "name": "raid_bdev1", 00:14:38.585 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:38.585 "strip_size_kb": 64, 00:14:38.585 "state": "online", 00:14:38.585 "raid_level": "raid5f", 00:14:38.585 "superblock": false, 00:14:38.585 "num_base_bdevs": 4, 00:14:38.585 "num_base_bdevs_discovered": 3, 00:14:38.585 "num_base_bdevs_operational": 3, 00:14:38.585 "base_bdevs_list": [ 00:14:38.585 { 00:14:38.585 "name": null, 00:14:38.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.585 "is_configured": false, 00:14:38.585 "data_offset": 0, 00:14:38.585 "data_size": 65536 00:14:38.585 }, 00:14:38.585 { 00:14:38.585 "name": "BaseBdev2", 00:14:38.585 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:38.585 "is_configured": true, 00:14:38.585 "data_offset": 0, 00:14:38.585 "data_size": 65536 00:14:38.585 }, 00:14:38.585 { 00:14:38.585 "name": "BaseBdev3", 00:14:38.585 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:38.585 "is_configured": true, 00:14:38.585 "data_offset": 0, 00:14:38.585 "data_size": 65536 00:14:38.585 }, 00:14:38.585 { 00:14:38.585 "name": "BaseBdev4", 00:14:38.585 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:38.585 "is_configured": true, 00:14:38.585 "data_offset": 0, 00:14:38.585 "data_size": 65536 00:14:38.585 } 00:14:38.585 ] 00:14:38.585 }' 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.585 [2024-11-28 16:27:30.250543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.585 [2024-11-28 16:27:30.253866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:38.585 [2024-11-28 16:27:30.255904] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.585 16:27:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.526 
16:27:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.526 16:27:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.787 "name": "raid_bdev1", 00:14:39.787 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:39.787 "strip_size_kb": 64, 00:14:39.787 "state": "online", 00:14:39.787 "raid_level": "raid5f", 00:14:39.787 "superblock": false, 00:14:39.787 "num_base_bdevs": 4, 00:14:39.787 "num_base_bdevs_discovered": 4, 00:14:39.787 "num_base_bdevs_operational": 4, 00:14:39.787 "process": { 00:14:39.787 "type": "rebuild", 00:14:39.787 "target": "spare", 00:14:39.787 "progress": { 00:14:39.787 "blocks": 19200, 00:14:39.787 "percent": 9 00:14:39.787 } 00:14:39.787 }, 00:14:39.787 "base_bdevs_list": [ 00:14:39.787 { 00:14:39.787 "name": "spare", 00:14:39.787 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 }, 00:14:39.787 { 00:14:39.787 "name": "BaseBdev2", 00:14:39.787 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 }, 00:14:39.787 { 00:14:39.787 "name": "BaseBdev3", 00:14:39.787 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 }, 00:14:39.787 { 00:14:39.787 "name": "BaseBdev4", 00:14:39.787 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 } 00:14:39.787 ] 00:14:39.787 }' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=510 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:39.787 "name": "raid_bdev1", 00:14:39.787 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:39.787 "strip_size_kb": 64, 00:14:39.787 "state": "online", 00:14:39.787 "raid_level": "raid5f", 00:14:39.787 "superblock": false, 00:14:39.787 "num_base_bdevs": 4, 00:14:39.787 "num_base_bdevs_discovered": 4, 00:14:39.787 "num_base_bdevs_operational": 4, 00:14:39.787 "process": { 00:14:39.787 "type": "rebuild", 00:14:39.787 "target": "spare", 00:14:39.787 "progress": { 00:14:39.787 "blocks": 21120, 00:14:39.787 "percent": 10 00:14:39.787 } 00:14:39.787 }, 00:14:39.787 "base_bdevs_list": [ 00:14:39.787 { 00:14:39.787 "name": "spare", 00:14:39.787 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 }, 00:14:39.787 { 00:14:39.787 "name": "BaseBdev2", 00:14:39.787 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 }, 00:14:39.787 { 00:14:39.787 "name": "BaseBdev3", 00:14:39.787 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 }, 00:14:39.787 { 00:14:39.787 "name": "BaseBdev4", 00:14:39.787 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:39.787 "is_configured": true, 00:14:39.787 "data_offset": 0, 00:14:39.787 "data_size": 65536 00:14:39.787 } 00:14:39.787 ] 00:14:39.787 }' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.787 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.047 16:27:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.047 16:27:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.986 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.987 "name": "raid_bdev1", 00:14:40.987 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:40.987 "strip_size_kb": 64, 00:14:40.987 "state": "online", 00:14:40.987 "raid_level": "raid5f", 00:14:40.987 "superblock": false, 00:14:40.987 "num_base_bdevs": 4, 00:14:40.987 "num_base_bdevs_discovered": 4, 00:14:40.987 "num_base_bdevs_operational": 4, 00:14:40.987 "process": { 00:14:40.987 "type": "rebuild", 00:14:40.987 "target": "spare", 00:14:40.987 "progress": { 00:14:40.987 "blocks": 44160, 00:14:40.987 "percent": 22 00:14:40.987 } 00:14:40.987 }, 00:14:40.987 "base_bdevs_list": [ 00:14:40.987 { 
00:14:40.987 "name": "spare", 00:14:40.987 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:40.987 "is_configured": true, 00:14:40.987 "data_offset": 0, 00:14:40.987 "data_size": 65536 00:14:40.987 }, 00:14:40.987 { 00:14:40.987 "name": "BaseBdev2", 00:14:40.987 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:40.987 "is_configured": true, 00:14:40.987 "data_offset": 0, 00:14:40.987 "data_size": 65536 00:14:40.987 }, 00:14:40.987 { 00:14:40.987 "name": "BaseBdev3", 00:14:40.987 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:40.987 "is_configured": true, 00:14:40.987 "data_offset": 0, 00:14:40.987 "data_size": 65536 00:14:40.987 }, 00:14:40.987 { 00:14:40.987 "name": "BaseBdev4", 00:14:40.987 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:40.987 "is_configured": true, 00:14:40.987 "data_offset": 0, 00:14:40.987 "data_size": 65536 00:14:40.987 } 00:14:40.987 ] 00:14:40.987 }' 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.987 16:27:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.369 "name": "raid_bdev1", 00:14:42.369 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:42.369 "strip_size_kb": 64, 00:14:42.369 "state": "online", 00:14:42.369 "raid_level": "raid5f", 00:14:42.369 "superblock": false, 00:14:42.369 "num_base_bdevs": 4, 00:14:42.369 "num_base_bdevs_discovered": 4, 00:14:42.369 "num_base_bdevs_operational": 4, 00:14:42.369 "process": { 00:14:42.369 "type": "rebuild", 00:14:42.369 "target": "spare", 00:14:42.369 "progress": { 00:14:42.369 "blocks": 65280, 00:14:42.369 "percent": 33 00:14:42.369 } 00:14:42.369 }, 00:14:42.369 "base_bdevs_list": [ 00:14:42.369 { 00:14:42.369 "name": "spare", 00:14:42.369 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:42.369 "is_configured": true, 00:14:42.369 "data_offset": 0, 00:14:42.369 "data_size": 65536 00:14:42.369 }, 00:14:42.369 { 00:14:42.369 "name": "BaseBdev2", 00:14:42.369 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:42.369 "is_configured": true, 00:14:42.369 "data_offset": 0, 00:14:42.369 "data_size": 65536 00:14:42.369 }, 00:14:42.369 { 00:14:42.369 "name": "BaseBdev3", 00:14:42.369 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:42.369 "is_configured": true, 00:14:42.369 "data_offset": 0, 00:14:42.369 
"data_size": 65536 00:14:42.369 }, 00:14:42.369 { 00:14:42.369 "name": "BaseBdev4", 00:14:42.369 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:42.369 "is_configured": true, 00:14:42.369 "data_offset": 0, 00:14:42.369 "data_size": 65536 00:14:42.369 } 00:14:42.369 ] 00:14:42.369 }' 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.369 16:27:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.309 "name": "raid_bdev1", 00:14:43.309 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:43.309 "strip_size_kb": 64, 00:14:43.309 "state": "online", 00:14:43.309 "raid_level": "raid5f", 00:14:43.309 "superblock": false, 00:14:43.309 "num_base_bdevs": 4, 00:14:43.309 "num_base_bdevs_discovered": 4, 00:14:43.309 "num_base_bdevs_operational": 4, 00:14:43.309 "process": { 00:14:43.309 "type": "rebuild", 00:14:43.309 "target": "spare", 00:14:43.309 "progress": { 00:14:43.309 "blocks": 88320, 00:14:43.309 "percent": 44 00:14:43.309 } 00:14:43.309 }, 00:14:43.309 "base_bdevs_list": [ 00:14:43.309 { 00:14:43.309 "name": "spare", 00:14:43.309 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:43.309 "is_configured": true, 00:14:43.309 "data_offset": 0, 00:14:43.309 "data_size": 65536 00:14:43.309 }, 00:14:43.309 { 00:14:43.309 "name": "BaseBdev2", 00:14:43.309 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:43.309 "is_configured": true, 00:14:43.309 "data_offset": 0, 00:14:43.309 "data_size": 65536 00:14:43.309 }, 00:14:43.309 { 00:14:43.309 "name": "BaseBdev3", 00:14:43.309 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:43.309 "is_configured": true, 00:14:43.309 "data_offset": 0, 00:14:43.309 "data_size": 65536 00:14:43.309 }, 00:14:43.309 { 00:14:43.309 "name": "BaseBdev4", 00:14:43.309 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:43.309 "is_configured": true, 00:14:43.309 "data_offset": 0, 00:14:43.309 "data_size": 65536 00:14:43.309 } 00:14:43.309 ] 00:14:43.309 }' 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.309 16:27:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:43.309 16:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.309 16:27:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.691 "name": "raid_bdev1", 00:14:44.691 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:44.691 "strip_size_kb": 64, 00:14:44.691 "state": "online", 00:14:44.691 "raid_level": "raid5f", 00:14:44.691 "superblock": false, 00:14:44.691 "num_base_bdevs": 4, 00:14:44.691 "num_base_bdevs_discovered": 4, 00:14:44.691 "num_base_bdevs_operational": 4, 00:14:44.691 "process": { 00:14:44.691 "type": "rebuild", 00:14:44.691 "target": "spare", 00:14:44.691 
"progress": { 00:14:44.691 "blocks": 109440, 00:14:44.691 "percent": 55 00:14:44.691 } 00:14:44.691 }, 00:14:44.691 "base_bdevs_list": [ 00:14:44.691 { 00:14:44.691 "name": "spare", 00:14:44.691 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:44.691 "is_configured": true, 00:14:44.691 "data_offset": 0, 00:14:44.691 "data_size": 65536 00:14:44.691 }, 00:14:44.691 { 00:14:44.691 "name": "BaseBdev2", 00:14:44.691 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:44.691 "is_configured": true, 00:14:44.691 "data_offset": 0, 00:14:44.691 "data_size": 65536 00:14:44.691 }, 00:14:44.691 { 00:14:44.691 "name": "BaseBdev3", 00:14:44.691 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:44.691 "is_configured": true, 00:14:44.691 "data_offset": 0, 00:14:44.691 "data_size": 65536 00:14:44.691 }, 00:14:44.691 { 00:14:44.691 "name": "BaseBdev4", 00:14:44.691 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:44.691 "is_configured": true, 00:14:44.691 "data_offset": 0, 00:14:44.691 "data_size": 65536 00:14:44.691 } 00:14:44.691 ] 00:14:44.691 }' 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.691 16:27:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.635 16:27:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.635 "name": "raid_bdev1", 00:14:45.635 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:45.635 "strip_size_kb": 64, 00:14:45.635 "state": "online", 00:14:45.635 "raid_level": "raid5f", 00:14:45.635 "superblock": false, 00:14:45.635 "num_base_bdevs": 4, 00:14:45.635 "num_base_bdevs_discovered": 4, 00:14:45.635 "num_base_bdevs_operational": 4, 00:14:45.635 "process": { 00:14:45.635 "type": "rebuild", 00:14:45.635 "target": "spare", 00:14:45.635 "progress": { 00:14:45.635 "blocks": 130560, 00:14:45.635 "percent": 66 00:14:45.635 } 00:14:45.635 }, 00:14:45.635 "base_bdevs_list": [ 00:14:45.635 { 00:14:45.635 "name": "spare", 00:14:45.635 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:45.635 "is_configured": true, 00:14:45.635 "data_offset": 0, 00:14:45.635 "data_size": 65536 00:14:45.635 }, 00:14:45.635 { 00:14:45.635 "name": "BaseBdev2", 00:14:45.635 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:45.635 "is_configured": true, 00:14:45.635 "data_offset": 0, 00:14:45.635 "data_size": 65536 00:14:45.635 }, 00:14:45.635 { 
00:14:45.635 "name": "BaseBdev3", 00:14:45.635 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:45.635 "is_configured": true, 00:14:45.635 "data_offset": 0, 00:14:45.635 "data_size": 65536 00:14:45.635 }, 00:14:45.635 { 00:14:45.635 "name": "BaseBdev4", 00:14:45.635 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:45.635 "is_configured": true, 00:14:45.635 "data_offset": 0, 00:14:45.635 "data_size": 65536 00:14:45.635 } 00:14:45.635 ] 00:14:45.635 }' 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.635 16:27:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.576 16:27:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.837 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.837 "name": "raid_bdev1", 00:14:46.837 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:46.837 "strip_size_kb": 64, 00:14:46.837 "state": "online", 00:14:46.837 "raid_level": "raid5f", 00:14:46.837 "superblock": false, 00:14:46.837 "num_base_bdevs": 4, 00:14:46.837 "num_base_bdevs_discovered": 4, 00:14:46.837 "num_base_bdevs_operational": 4, 00:14:46.837 "process": { 00:14:46.837 "type": "rebuild", 00:14:46.837 "target": "spare", 00:14:46.837 "progress": { 00:14:46.837 "blocks": 153600, 00:14:46.837 "percent": 78 00:14:46.837 } 00:14:46.837 }, 00:14:46.837 "base_bdevs_list": [ 00:14:46.837 { 00:14:46.837 "name": "spare", 00:14:46.837 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:46.837 "is_configured": true, 00:14:46.837 "data_offset": 0, 00:14:46.837 "data_size": 65536 00:14:46.837 }, 00:14:46.837 { 00:14:46.837 "name": "BaseBdev2", 00:14:46.837 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:46.837 "is_configured": true, 00:14:46.837 "data_offset": 0, 00:14:46.837 "data_size": 65536 00:14:46.837 }, 00:14:46.837 { 00:14:46.837 "name": "BaseBdev3", 00:14:46.837 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:46.837 "is_configured": true, 00:14:46.837 "data_offset": 0, 00:14:46.837 "data_size": 65536 00:14:46.837 }, 00:14:46.837 { 00:14:46.837 "name": "BaseBdev4", 00:14:46.837 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:46.837 "is_configured": true, 00:14:46.837 "data_offset": 0, 00:14:46.837 "data_size": 65536 00:14:46.837 } 00:14:46.837 ] 00:14:46.837 }' 00:14:46.837 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.837 16:27:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.837 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.837 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.837 16:27:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.775 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.776 "name": "raid_bdev1", 00:14:47.776 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:47.776 "strip_size_kb": 64, 00:14:47.776 "state": "online", 00:14:47.776 "raid_level": "raid5f", 00:14:47.776 "superblock": false, 00:14:47.776 "num_base_bdevs": 4, 00:14:47.776 
"num_base_bdevs_discovered": 4, 00:14:47.776 "num_base_bdevs_operational": 4, 00:14:47.776 "process": { 00:14:47.776 "type": "rebuild", 00:14:47.776 "target": "spare", 00:14:47.776 "progress": { 00:14:47.776 "blocks": 174720, 00:14:47.776 "percent": 88 00:14:47.776 } 00:14:47.776 }, 00:14:47.776 "base_bdevs_list": [ 00:14:47.776 { 00:14:47.776 "name": "spare", 00:14:47.776 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:47.776 "is_configured": true, 00:14:47.776 "data_offset": 0, 00:14:47.776 "data_size": 65536 00:14:47.776 }, 00:14:47.776 { 00:14:47.776 "name": "BaseBdev2", 00:14:47.776 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:47.776 "is_configured": true, 00:14:47.776 "data_offset": 0, 00:14:47.776 "data_size": 65536 00:14:47.776 }, 00:14:47.776 { 00:14:47.776 "name": "BaseBdev3", 00:14:47.776 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:47.776 "is_configured": true, 00:14:47.776 "data_offset": 0, 00:14:47.776 "data_size": 65536 00:14:47.776 }, 00:14:47.776 { 00:14:47.776 "name": "BaseBdev4", 00:14:47.776 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:47.776 "is_configured": true, 00:14:47.776 "data_offset": 0, 00:14:47.776 "data_size": 65536 00:14:47.776 } 00:14:47.776 ] 00:14:47.776 }' 00:14:47.776 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.035 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.035 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.035 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.035 16:27:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.976 [2024-11-28 16:27:40.595525] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:48.976 [2024-11-28 16:27:40.595607] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:48.976 [2024-11-28 16:27:40.595645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.976 "name": "raid_bdev1", 00:14:48.976 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:48.976 "strip_size_kb": 64, 00:14:48.976 "state": "online", 00:14:48.976 "raid_level": "raid5f", 00:14:48.976 "superblock": false, 00:14:48.976 "num_base_bdevs": 4, 00:14:48.976 "num_base_bdevs_discovered": 4, 00:14:48.976 "num_base_bdevs_operational": 4, 00:14:48.976 "base_bdevs_list": [ 00:14:48.976 { 00:14:48.976 "name": "spare", 00:14:48.976 "uuid": 
"d4dd54d3-5503-5852-a062-386c67b17049", 00:14:48.976 "is_configured": true, 00:14:48.976 "data_offset": 0, 00:14:48.976 "data_size": 65536 00:14:48.976 }, 00:14:48.976 { 00:14:48.976 "name": "BaseBdev2", 00:14:48.976 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:48.976 "is_configured": true, 00:14:48.976 "data_offset": 0, 00:14:48.976 "data_size": 65536 00:14:48.976 }, 00:14:48.976 { 00:14:48.976 "name": "BaseBdev3", 00:14:48.976 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:48.976 "is_configured": true, 00:14:48.976 "data_offset": 0, 00:14:48.976 "data_size": 65536 00:14:48.976 }, 00:14:48.976 { 00:14:48.976 "name": "BaseBdev4", 00:14:48.976 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:48.976 "is_configured": true, 00:14:48.976 "data_offset": 0, 00:14:48.976 "data_size": 65536 00:14:48.976 } 00:14:48.976 ] 00:14:48.976 }' 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:48.976 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.236 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.237 "name": "raid_bdev1", 00:14:49.237 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:49.237 "strip_size_kb": 64, 00:14:49.237 "state": "online", 00:14:49.237 "raid_level": "raid5f", 00:14:49.237 "superblock": false, 00:14:49.237 "num_base_bdevs": 4, 00:14:49.237 "num_base_bdevs_discovered": 4, 00:14:49.237 "num_base_bdevs_operational": 4, 00:14:49.237 "base_bdevs_list": [ 00:14:49.237 { 00:14:49.237 "name": "spare", 00:14:49.237 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 }, 00:14:49.237 { 00:14:49.237 "name": "BaseBdev2", 00:14:49.237 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 }, 00:14:49.237 { 00:14:49.237 "name": "BaseBdev3", 00:14:49.237 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 }, 00:14:49.237 { 00:14:49.237 "name": "BaseBdev4", 00:14:49.237 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 } 00:14:49.237 ] 00:14:49.237 }' 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.237 "name": "raid_bdev1", 00:14:49.237 "uuid": "0ea17aed-2cef-4e6d-b20c-4909e397cb38", 00:14:49.237 "strip_size_kb": 64, 00:14:49.237 "state": "online", 00:14:49.237 "raid_level": "raid5f", 00:14:49.237 "superblock": false, 00:14:49.237 "num_base_bdevs": 4, 00:14:49.237 "num_base_bdevs_discovered": 4, 00:14:49.237 "num_base_bdevs_operational": 4, 00:14:49.237 "base_bdevs_list": [ 00:14:49.237 { 00:14:49.237 "name": "spare", 00:14:49.237 "uuid": "d4dd54d3-5503-5852-a062-386c67b17049", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 }, 00:14:49.237 { 00:14:49.237 "name": "BaseBdev2", 00:14:49.237 "uuid": "7e0b340c-b86a-5a42-8a26-91f3e3132179", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 }, 00:14:49.237 { 00:14:49.237 "name": "BaseBdev3", 00:14:49.237 "uuid": "4a25d4bd-a763-5a04-8997-cda4ec68e464", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 }, 00:14:49.237 { 00:14:49.237 "name": "BaseBdev4", 00:14:49.237 "uuid": "a849b3dc-46a9-5195-b488-8bcfe3d524a5", 00:14:49.237 "is_configured": true, 00:14:49.237 "data_offset": 0, 00:14:49.237 "data_size": 65536 00:14:49.237 } 00:14:49.237 ] 00:14:49.237 }' 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.237 16:27:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.808 [2024-11-28 16:27:41.379330] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.808 [2024-11-28 16:27:41.379366] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.808 [2024-11-28 16:27:41.379445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.808 [2024-11-28 16:27:41.379547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.808 [2024-11-28 16:27:41.379566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.808 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:50.068 /dev/nbd0 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.068 1+0 records in 
00:14:50.068 1+0 records out 00:14:50.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381312 s, 10.7 MB/s 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.068 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:50.328 /dev/nbd1 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.328 1+0 records in 00:14:50.328 1+0 records out 00:14:50.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049242 s, 8.3 MB/s 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.328 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:50.329 16:27:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:50.329 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.329 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.329 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.329 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:50.329 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 
-- # for i in "${nbd_list[@]}" 00:14:50.329 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.588 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.848 16:27:42 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95033 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95033 ']' 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95033 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95033 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:50.848 killing process with pid 95033 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95033' 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95033 00:14:50.848 Received shutdown signal, test time was about 60.000000 seconds 00:14:50.848 00:14:50.848 Latency(us) 00:14:50.848 [2024-11-28T16:27:42.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.848 [2024-11-28T16:27:42.619Z] =================================================================================================================== 00:14:50.848 [2024-11-28T16:27:42.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.848 [2024-11-28 16:27:42.436457] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.848 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # 
wait 95033 00:14:50.848 [2024-11-28 16:27:42.487705] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.108 16:27:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:51.108 00:14:51.108 real 0m17.593s 00:14:51.108 user 0m21.236s 00:14:51.109 sys 0m2.675s 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.109 ************************************ 00:14:51.109 END TEST raid5f_rebuild_test 00:14:51.109 ************************************ 00:14:51.109 16:27:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:51.109 16:27:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:51.109 16:27:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.109 16:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.109 ************************************ 00:14:51.109 START TEST raid5f_rebuild_test_sb 00:14:51.109 ************************************ 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:51.109 16:27:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95520 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95520 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95520 ']' 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.109 16:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.369 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:51.369 Zero copy mechanism will not be used. 00:14:51.369 [2024-11-28 16:27:42.889481] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:51.369 [2024-11-28 16:27:42.889631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95520 ] 00:14:51.369 [2024-11-28 16:27:43.035633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.369 [2024-11-28 16:27:43.078642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.369 [2024-11-28 16:27:43.121625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.369 [2024-11-28 16:27:43.121664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.939 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.939 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:51.939 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.939 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.939 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.939 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 BaseBdev1_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 [2024-11-28 16:27:43.728178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.200 [2024-11-28 16:27:43.728247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.200 [2024-11-28 16:27:43.728282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:52.200 [2024-11-28 16:27:43.728299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.200 [2024-11-28 16:27:43.730344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.200 [2024-11-28 16:27:43.730381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.200 BaseBdev1 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 BaseBdev2_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:52.200 
16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 [2024-11-28 16:27:43.766745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:52.200 [2024-11-28 16:27:43.766804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.200 [2024-11-28 16:27:43.766826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:52.200 [2024-11-28 16:27:43.766848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.200 [2024-11-28 16:27:43.769102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.200 [2024-11-28 16:27:43.769144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:52.200 BaseBdev2 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 BaseBdev3_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.200 [2024-11-28 16:27:43.795318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:52.200 [2024-11-28 16:27:43.795381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.200 [2024-11-28 16:27:43.795403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:52.200 [2024-11-28 16:27:43.795412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.200 [2024-11-28 16:27:43.797387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.200 [2024-11-28 16:27:43.797421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:52.200 BaseBdev3 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 BaseBdev4_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 [2024-11-28 16:27:43.823816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:52.200 
[2024-11-28 16:27:43.823878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.200 [2024-11-28 16:27:43.823926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:52.200 [2024-11-28 16:27:43.823935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.200 [2024-11-28 16:27:43.825945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.200 [2024-11-28 16:27:43.825981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:52.200 BaseBdev4 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 spare_malloc 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.200 spare_delay 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.200 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.201 16:27:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.201 [2024-11-28 16:27:43.864419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.201 [2024-11-28 16:27:43.864471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.201 [2024-11-28 16:27:43.864491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:52.201 [2024-11-28 16:27:43.864499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.201 [2024-11-28 16:27:43.866480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.201 [2024-11-28 16:27:43.866516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.201 spare 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.201 [2024-11-28 16:27:43.876482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.201 [2024-11-28 16:27:43.878276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.201 [2024-11-28 16:27:43.878338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.201 [2024-11-28 16:27:43.878376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.201 [2024-11-28 16:27:43.878536] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:52.201 [2024-11-28 
16:27:43.878551] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:52.201 [2024-11-28 16:27:43.878787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:52.201 [2024-11-28 16:27:43.879227] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:52.201 [2024-11-28 16:27:43.879245] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:52.201 [2024-11-28 16:27:43.879358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.201 16:27:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.201 "name": "raid_bdev1", 00:14:52.201 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:52.201 "strip_size_kb": 64, 00:14:52.201 "state": "online", 00:14:52.201 "raid_level": "raid5f", 00:14:52.201 "superblock": true, 00:14:52.201 "num_base_bdevs": 4, 00:14:52.201 "num_base_bdevs_discovered": 4, 00:14:52.201 "num_base_bdevs_operational": 4, 00:14:52.201 "base_bdevs_list": [ 00:14:52.201 { 00:14:52.201 "name": "BaseBdev1", 00:14:52.201 "uuid": "d5a16b80-a37c-52cf-84cf-8ef4033fa8c6", 00:14:52.201 "is_configured": true, 00:14:52.201 "data_offset": 2048, 00:14:52.201 "data_size": 63488 00:14:52.201 }, 00:14:52.201 { 00:14:52.201 "name": "BaseBdev2", 00:14:52.201 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:52.201 "is_configured": true, 00:14:52.201 "data_offset": 2048, 00:14:52.201 "data_size": 63488 00:14:52.201 }, 00:14:52.201 { 00:14:52.201 "name": "BaseBdev3", 00:14:52.201 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:52.201 "is_configured": true, 00:14:52.201 "data_offset": 2048, 00:14:52.201 "data_size": 63488 00:14:52.201 }, 00:14:52.201 { 00:14:52.201 "name": "BaseBdev4", 00:14:52.201 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:52.201 "is_configured": true, 00:14:52.201 "data_offset": 2048, 00:14:52.201 "data_size": 63488 00:14:52.201 } 00:14:52.201 ] 00:14:52.201 }' 00:14:52.201 16:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.201 16:27:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.771 [2024-11-28 16:27:44.356497] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:52.771 16:27:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.771 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:53.031 [2024-11-28 16:27:44.612048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:53.031 /dev/nbd0 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.031 1+0 records in 00:14:53.031 1+0 records out 00:14:53.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458839 s, 8.9 MB/s 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:53.031 16:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:53.291 496+0 records in 00:14:53.291 496+0 records out 00:14:53.291 97517568 bytes (98 MB, 93 MiB) copied, 0.381972 s, 255 MB/s 00:14:53.291 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:53.292 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.553 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:53.553 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.554 [2024-11-28 16:27:45.263937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.554 [2024-11-28 16:27:45.279987] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.554 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.813 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.813 "name": "raid_bdev1", 00:14:53.813 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:53.813 "strip_size_kb": 64, 00:14:53.813 "state": "online", 00:14:53.813 "raid_level": "raid5f", 00:14:53.813 "superblock": true, 00:14:53.813 "num_base_bdevs": 4, 00:14:53.813 "num_base_bdevs_discovered": 3, 00:14:53.813 "num_base_bdevs_operational": 3, 00:14:53.813 "base_bdevs_list": [ 00:14:53.813 { 00:14:53.813 "name": null, 00:14:53.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.813 "is_configured": false, 00:14:53.813 "data_offset": 0, 00:14:53.813 "data_size": 63488 00:14:53.813 }, 00:14:53.813 { 00:14:53.813 "name": "BaseBdev2", 00:14:53.813 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:53.813 "is_configured": true, 00:14:53.813 "data_offset": 2048, 00:14:53.813 "data_size": 63488 00:14:53.813 }, 00:14:53.813 { 00:14:53.813 "name": "BaseBdev3", 00:14:53.813 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:53.813 "is_configured": true, 00:14:53.813 "data_offset": 2048, 00:14:53.813 "data_size": 63488 00:14:53.813 }, 00:14:53.813 { 00:14:53.813 "name": "BaseBdev4", 00:14:53.813 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:53.813 "is_configured": true, 00:14:53.813 "data_offset": 2048, 00:14:53.813 "data_size": 63488 00:14:53.813 } 00:14:53.813 ] 00:14:53.813 }' 00:14:53.813 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.813 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.073 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.073 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.073 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.073 [2024-11-28 16:27:45.747188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:54.073 [2024-11-28 16:27:45.750635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:14:54.073 [2024-11-28 16:27:45.752809] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.073 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.073 16:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.044 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.321 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.321 "name": "raid_bdev1", 00:14:55.321 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:55.321 "strip_size_kb": 64, 00:14:55.321 "state": "online", 00:14:55.321 "raid_level": "raid5f", 00:14:55.321 "superblock": true, 00:14:55.321 "num_base_bdevs": 4, 
00:14:55.321 "num_base_bdevs_discovered": 4, 00:14:55.321 "num_base_bdevs_operational": 4, 00:14:55.321 "process": { 00:14:55.321 "type": "rebuild", 00:14:55.321 "target": "spare", 00:14:55.321 "progress": { 00:14:55.321 "blocks": 19200, 00:14:55.321 "percent": 10 00:14:55.321 } 00:14:55.321 }, 00:14:55.321 "base_bdevs_list": [ 00:14:55.321 { 00:14:55.321 "name": "spare", 00:14:55.321 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:14:55.321 "is_configured": true, 00:14:55.321 "data_offset": 2048, 00:14:55.321 "data_size": 63488 00:14:55.321 }, 00:14:55.321 { 00:14:55.321 "name": "BaseBdev2", 00:14:55.321 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:55.321 "is_configured": true, 00:14:55.321 "data_offset": 2048, 00:14:55.321 "data_size": 63488 00:14:55.321 }, 00:14:55.321 { 00:14:55.321 "name": "BaseBdev3", 00:14:55.321 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:55.321 "is_configured": true, 00:14:55.321 "data_offset": 2048, 00:14:55.321 "data_size": 63488 00:14:55.321 }, 00:14:55.321 { 00:14:55.321 "name": "BaseBdev4", 00:14:55.321 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:55.321 "is_configured": true, 00:14:55.321 "data_offset": 2048, 00:14:55.321 "data_size": 63488 00:14:55.321 } 00:14:55.321 ] 00:14:55.321 }' 00:14:55.321 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.321 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.322 16:27:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.322 [2024-11-28 16:27:46.911655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.322 [2024-11-28 16:27:46.958316] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:55.322 [2024-11-28 16:27:46.958381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.322 [2024-11-28 16:27:46.958398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.322 [2024-11-28 16:27:46.958406] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.322 16:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.322 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.322 "name": "raid_bdev1", 00:14:55.322 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:55.322 "strip_size_kb": 64, 00:14:55.322 "state": "online", 00:14:55.322 "raid_level": "raid5f", 00:14:55.322 "superblock": true, 00:14:55.322 "num_base_bdevs": 4, 00:14:55.322 "num_base_bdevs_discovered": 3, 00:14:55.322 "num_base_bdevs_operational": 3, 00:14:55.322 "base_bdevs_list": [ 00:14:55.322 { 00:14:55.322 "name": null, 00:14:55.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.322 "is_configured": false, 00:14:55.322 "data_offset": 0, 00:14:55.322 "data_size": 63488 00:14:55.322 }, 00:14:55.322 { 00:14:55.322 "name": "BaseBdev2", 00:14:55.322 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:55.322 "is_configured": true, 00:14:55.322 "data_offset": 2048, 00:14:55.322 "data_size": 63488 00:14:55.322 }, 00:14:55.322 { 00:14:55.322 "name": "BaseBdev3", 00:14:55.322 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:55.322 "is_configured": true, 00:14:55.322 "data_offset": 2048, 00:14:55.322 "data_size": 63488 00:14:55.322 }, 00:14:55.322 { 00:14:55.322 "name": "BaseBdev4", 00:14:55.322 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:55.322 "is_configured": true, 00:14:55.322 "data_offset": 2048, 00:14:55.322 "data_size": 63488 00:14:55.322 } 00:14:55.322 ] 00:14:55.322 }' 00:14:55.322 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.322 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.907 "name": "raid_bdev1", 00:14:55.907 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:55.907 "strip_size_kb": 64, 00:14:55.907 "state": "online", 00:14:55.907 "raid_level": "raid5f", 00:14:55.907 "superblock": true, 00:14:55.907 "num_base_bdevs": 4, 00:14:55.907 "num_base_bdevs_discovered": 3, 00:14:55.907 "num_base_bdevs_operational": 3, 00:14:55.907 "base_bdevs_list": [ 00:14:55.907 { 00:14:55.907 "name": null, 00:14:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.907 "is_configured": false, 00:14:55.907 "data_offset": 0, 00:14:55.907 "data_size": 63488 00:14:55.907 }, 00:14:55.907 { 
00:14:55.907 "name": "BaseBdev2", 00:14:55.907 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:55.907 "is_configured": true, 00:14:55.907 "data_offset": 2048, 00:14:55.907 "data_size": 63488 00:14:55.907 }, 00:14:55.907 { 00:14:55.907 "name": "BaseBdev3", 00:14:55.907 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:55.907 "is_configured": true, 00:14:55.907 "data_offset": 2048, 00:14:55.907 "data_size": 63488 00:14:55.907 }, 00:14:55.907 { 00:14:55.907 "name": "BaseBdev4", 00:14:55.907 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:55.907 "is_configured": true, 00:14:55.907 "data_offset": 2048, 00:14:55.907 "data_size": 63488 00:14:55.907 } 00:14:55.907 ] 00:14:55.907 }' 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.907 [2024-11-28 16:27:47.582609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.907 [2024-11-28 16:27:47.585646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:14:55.907 [2024-11-28 16:27:47.587884] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.907 16:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.907 
16:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.847 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.108 "name": "raid_bdev1", 00:14:57.108 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:57.108 "strip_size_kb": 64, 00:14:57.108 "state": "online", 00:14:57.108 "raid_level": "raid5f", 00:14:57.108 "superblock": true, 00:14:57.108 "num_base_bdevs": 4, 00:14:57.108 "num_base_bdevs_discovered": 4, 00:14:57.108 "num_base_bdevs_operational": 4, 00:14:57.108 "process": { 00:14:57.108 "type": "rebuild", 00:14:57.108 "target": "spare", 00:14:57.108 "progress": { 00:14:57.108 "blocks": 19200, 00:14:57.108 "percent": 10 00:14:57.108 } 00:14:57.108 }, 00:14:57.108 "base_bdevs_list": [ 00:14:57.108 { 00:14:57.108 "name": "spare", 00:14:57.108 "uuid": 
"1412193a-164a-5e5e-89be-0fd452ad87a3", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev2", 00:14:57.108 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev3", 00:14:57.108 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev4", 00:14:57.108 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 } 00:14:57.108 ] 00:14:57.108 }' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:57.108 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=527 00:14:57.108 
16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.108 "name": "raid_bdev1", 00:14:57.108 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:57.108 "strip_size_kb": 64, 00:14:57.108 "state": "online", 00:14:57.108 "raid_level": "raid5f", 00:14:57.108 "superblock": true, 00:14:57.108 "num_base_bdevs": 4, 00:14:57.108 "num_base_bdevs_discovered": 4, 00:14:57.108 "num_base_bdevs_operational": 4, 00:14:57.108 "process": { 00:14:57.108 "type": "rebuild", 00:14:57.108 "target": "spare", 00:14:57.108 "progress": { 00:14:57.108 "blocks": 21120, 00:14:57.108 "percent": 11 00:14:57.108 } 00:14:57.108 }, 00:14:57.108 "base_bdevs_list": [ 00:14:57.108 { 00:14:57.108 "name": "spare", 00:14:57.108 "uuid": 
"1412193a-164a-5e5e-89be-0fd452ad87a3", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev2", 00:14:57.108 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev3", 00:14:57.108 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev4", 00:14:57.108 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 } 00:14:57.108 ] 00:14:57.108 }' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.108 16:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.490 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.490 "name": "raid_bdev1", 00:14:58.490 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:58.490 "strip_size_kb": 64, 00:14:58.490 "state": "online", 00:14:58.490 "raid_level": "raid5f", 00:14:58.490 "superblock": true, 00:14:58.490 "num_base_bdevs": 4, 00:14:58.490 "num_base_bdevs_discovered": 4, 00:14:58.490 "num_base_bdevs_operational": 4, 00:14:58.490 "process": { 00:14:58.490 "type": "rebuild", 00:14:58.490 "target": "spare", 00:14:58.490 "progress": { 00:14:58.490 "blocks": 42240, 00:14:58.490 "percent": 22 00:14:58.490 } 00:14:58.490 }, 00:14:58.490 "base_bdevs_list": [ 00:14:58.490 { 00:14:58.490 "name": "spare", 00:14:58.490 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:14:58.490 "is_configured": true, 00:14:58.490 "data_offset": 2048, 00:14:58.490 "data_size": 63488 00:14:58.490 }, 00:14:58.490 { 00:14:58.490 "name": "BaseBdev2", 00:14:58.490 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:58.490 "is_configured": true, 00:14:58.490 "data_offset": 2048, 00:14:58.490 "data_size": 63488 00:14:58.490 }, 00:14:58.490 { 00:14:58.490 "name": "BaseBdev3", 00:14:58.490 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:58.490 "is_configured": true, 00:14:58.490 
"data_offset": 2048, 00:14:58.490 "data_size": 63488 00:14:58.490 }, 00:14:58.491 { 00:14:58.491 "name": "BaseBdev4", 00:14:58.491 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:58.491 "is_configured": true, 00:14:58.491 "data_offset": 2048, 00:14:58.491 "data_size": 63488 00:14:58.491 } 00:14:58.491 ] 00:14:58.491 }' 00:14:58.491 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.491 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.491 16:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.491 16:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.491 16:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.431 "name": "raid_bdev1", 00:14:59.431 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:14:59.431 "strip_size_kb": 64, 00:14:59.431 "state": "online", 00:14:59.431 "raid_level": "raid5f", 00:14:59.431 "superblock": true, 00:14:59.431 "num_base_bdevs": 4, 00:14:59.431 "num_base_bdevs_discovered": 4, 00:14:59.431 "num_base_bdevs_operational": 4, 00:14:59.431 "process": { 00:14:59.431 "type": "rebuild", 00:14:59.431 "target": "spare", 00:14:59.431 "progress": { 00:14:59.431 "blocks": 65280, 00:14:59.431 "percent": 34 00:14:59.431 } 00:14:59.431 }, 00:14:59.431 "base_bdevs_list": [ 00:14:59.431 { 00:14:59.431 "name": "spare", 00:14:59.431 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:14:59.431 "is_configured": true, 00:14:59.431 "data_offset": 2048, 00:14:59.431 "data_size": 63488 00:14:59.431 }, 00:14:59.431 { 00:14:59.431 "name": "BaseBdev2", 00:14:59.431 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:14:59.431 "is_configured": true, 00:14:59.431 "data_offset": 2048, 00:14:59.431 "data_size": 63488 00:14:59.431 }, 00:14:59.431 { 00:14:59.431 "name": "BaseBdev3", 00:14:59.431 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:14:59.431 "is_configured": true, 00:14:59.431 "data_offset": 2048, 00:14:59.431 "data_size": 63488 00:14:59.431 }, 00:14:59.431 { 00:14:59.431 "name": "BaseBdev4", 00:14:59.431 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:14:59.431 "is_configured": true, 00:14:59.431 "data_offset": 2048, 00:14:59.431 "data_size": 63488 00:14:59.431 } 00:14:59.431 ] 00:14:59.431 }' 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:59.431 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.432 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.432 16:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.815 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.815 "name": "raid_bdev1", 00:15:00.815 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:00.815 "strip_size_kb": 64, 00:15:00.815 "state": "online", 00:15:00.815 "raid_level": "raid5f", 00:15:00.815 "superblock": true, 00:15:00.815 "num_base_bdevs": 4, 00:15:00.815 "num_base_bdevs_discovered": 4, 
00:15:00.815 "num_base_bdevs_operational": 4, 00:15:00.815 "process": { 00:15:00.815 "type": "rebuild", 00:15:00.815 "target": "spare", 00:15:00.815 "progress": { 00:15:00.815 "blocks": 86400, 00:15:00.815 "percent": 45 00:15:00.816 } 00:15:00.816 }, 00:15:00.816 "base_bdevs_list": [ 00:15:00.816 { 00:15:00.816 "name": "spare", 00:15:00.816 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:00.816 "is_configured": true, 00:15:00.816 "data_offset": 2048, 00:15:00.816 "data_size": 63488 00:15:00.816 }, 00:15:00.816 { 00:15:00.816 "name": "BaseBdev2", 00:15:00.816 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:00.816 "is_configured": true, 00:15:00.816 "data_offset": 2048, 00:15:00.816 "data_size": 63488 00:15:00.816 }, 00:15:00.816 { 00:15:00.816 "name": "BaseBdev3", 00:15:00.816 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:00.816 "is_configured": true, 00:15:00.816 "data_offset": 2048, 00:15:00.816 "data_size": 63488 00:15:00.816 }, 00:15:00.816 { 00:15:00.816 "name": "BaseBdev4", 00:15:00.816 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:00.816 "is_configured": true, 00:15:00.816 "data_offset": 2048, 00:15:00.816 "data_size": 63488 00:15:00.816 } 00:15:00.816 ] 00:15:00.816 }' 00:15:00.816 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.816 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.816 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.816 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.816 16:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.756 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.756 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:01.756 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.756 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.757 "name": "raid_bdev1", 00:15:01.757 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:01.757 "strip_size_kb": 64, 00:15:01.757 "state": "online", 00:15:01.757 "raid_level": "raid5f", 00:15:01.757 "superblock": true, 00:15:01.757 "num_base_bdevs": 4, 00:15:01.757 "num_base_bdevs_discovered": 4, 00:15:01.757 "num_base_bdevs_operational": 4, 00:15:01.757 "process": { 00:15:01.757 "type": "rebuild", 00:15:01.757 "target": "spare", 00:15:01.757 "progress": { 00:15:01.757 "blocks": 109440, 00:15:01.757 "percent": 57 00:15:01.757 } 00:15:01.757 }, 00:15:01.757 "base_bdevs_list": [ 00:15:01.757 { 00:15:01.757 "name": "spare", 00:15:01.757 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:01.757 "is_configured": true, 00:15:01.757 "data_offset": 2048, 00:15:01.757 "data_size": 63488 00:15:01.757 }, 00:15:01.757 { 00:15:01.757 "name": "BaseBdev2", 
00:15:01.757 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:01.757 "is_configured": true, 00:15:01.757 "data_offset": 2048, 00:15:01.757 "data_size": 63488 00:15:01.757 }, 00:15:01.757 { 00:15:01.757 "name": "BaseBdev3", 00:15:01.757 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:01.757 "is_configured": true, 00:15:01.757 "data_offset": 2048, 00:15:01.757 "data_size": 63488 00:15:01.757 }, 00:15:01.757 { 00:15:01.757 "name": "BaseBdev4", 00:15:01.757 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:01.757 "is_configured": true, 00:15:01.757 "data_offset": 2048, 00:15:01.757 "data_size": 63488 00:15:01.757 } 00:15:01.757 ] 00:15:01.757 }' 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.757 16:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.697 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.957 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.958 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.958 "name": "raid_bdev1", 00:15:02.958 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:02.958 "strip_size_kb": 64, 00:15:02.958 "state": "online", 00:15:02.958 "raid_level": "raid5f", 00:15:02.958 "superblock": true, 00:15:02.958 "num_base_bdevs": 4, 00:15:02.958 "num_base_bdevs_discovered": 4, 00:15:02.958 "num_base_bdevs_operational": 4, 00:15:02.958 "process": { 00:15:02.958 "type": "rebuild", 00:15:02.958 "target": "spare", 00:15:02.958 "progress": { 00:15:02.958 "blocks": 130560, 00:15:02.958 "percent": 68 00:15:02.958 } 00:15:02.958 }, 00:15:02.958 "base_bdevs_list": [ 00:15:02.958 { 00:15:02.958 "name": "spare", 00:15:02.958 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:02.958 "is_configured": true, 00:15:02.958 "data_offset": 2048, 00:15:02.958 "data_size": 63488 00:15:02.958 }, 00:15:02.958 { 00:15:02.958 "name": "BaseBdev2", 00:15:02.958 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:02.958 "is_configured": true, 00:15:02.958 "data_offset": 2048, 00:15:02.958 "data_size": 63488 00:15:02.958 }, 00:15:02.958 { 00:15:02.958 "name": "BaseBdev3", 00:15:02.958 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:02.958 "is_configured": true, 00:15:02.958 "data_offset": 2048, 00:15:02.958 "data_size": 63488 00:15:02.958 }, 00:15:02.958 { 00:15:02.958 "name": "BaseBdev4", 00:15:02.958 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:02.958 "is_configured": true, 
00:15:02.958 "data_offset": 2048, 00:15:02.958 "data_size": 63488 00:15:02.958 } 00:15:02.958 ] 00:15:02.958 }' 00:15:02.958 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.958 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.958 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.958 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.958 16:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.896 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:03.896 "name": "raid_bdev1", 00:15:03.896 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:03.896 "strip_size_kb": 64, 00:15:03.896 "state": "online", 00:15:03.896 "raid_level": "raid5f", 00:15:03.896 "superblock": true, 00:15:03.896 "num_base_bdevs": 4, 00:15:03.896 "num_base_bdevs_discovered": 4, 00:15:03.896 "num_base_bdevs_operational": 4, 00:15:03.896 "process": { 00:15:03.896 "type": "rebuild", 00:15:03.896 "target": "spare", 00:15:03.896 "progress": { 00:15:03.896 "blocks": 153600, 00:15:03.896 "percent": 80 00:15:03.896 } 00:15:03.896 }, 00:15:03.896 "base_bdevs_list": [ 00:15:03.896 { 00:15:03.896 "name": "spare", 00:15:03.896 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:03.896 "is_configured": true, 00:15:03.896 "data_offset": 2048, 00:15:03.896 "data_size": 63488 00:15:03.896 }, 00:15:03.896 { 00:15:03.896 "name": "BaseBdev2", 00:15:03.896 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:03.896 "is_configured": true, 00:15:03.896 "data_offset": 2048, 00:15:03.896 "data_size": 63488 00:15:03.896 }, 00:15:03.896 { 00:15:03.896 "name": "BaseBdev3", 00:15:03.896 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:03.896 "is_configured": true, 00:15:03.896 "data_offset": 2048, 00:15:03.896 "data_size": 63488 00:15:03.896 }, 00:15:03.896 { 00:15:03.896 "name": "BaseBdev4", 00:15:03.896 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:03.896 "is_configured": true, 00:15:03.896 "data_offset": 2048, 00:15:03.896 "data_size": 63488 00:15:03.897 } 00:15:03.897 ] 00:15:03.897 }' 00:15:04.156 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.156 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.157 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.157 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:15:04.157 16:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.097 "name": "raid_bdev1", 00:15:05.097 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:05.097 "strip_size_kb": 64, 00:15:05.097 "state": "online", 00:15:05.097 "raid_level": "raid5f", 00:15:05.097 "superblock": true, 00:15:05.097 "num_base_bdevs": 4, 00:15:05.097 "num_base_bdevs_discovered": 4, 00:15:05.097 "num_base_bdevs_operational": 4, 00:15:05.097 "process": { 00:15:05.097 "type": "rebuild", 00:15:05.097 "target": "spare", 00:15:05.097 "progress": { 00:15:05.097 "blocks": 174720, 00:15:05.097 "percent": 91 00:15:05.097 
} 00:15:05.097 }, 00:15:05.097 "base_bdevs_list": [ 00:15:05.097 { 00:15:05.097 "name": "spare", 00:15:05.097 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:05.097 "is_configured": true, 00:15:05.097 "data_offset": 2048, 00:15:05.097 "data_size": 63488 00:15:05.097 }, 00:15:05.097 { 00:15:05.097 "name": "BaseBdev2", 00:15:05.097 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:05.097 "is_configured": true, 00:15:05.097 "data_offset": 2048, 00:15:05.097 "data_size": 63488 00:15:05.097 }, 00:15:05.097 { 00:15:05.097 "name": "BaseBdev3", 00:15:05.097 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:05.097 "is_configured": true, 00:15:05.097 "data_offset": 2048, 00:15:05.097 "data_size": 63488 00:15:05.097 }, 00:15:05.097 { 00:15:05.097 "name": "BaseBdev4", 00:15:05.097 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:05.097 "is_configured": true, 00:15:05.097 "data_offset": 2048, 00:15:05.097 "data_size": 63488 00:15:05.097 } 00:15:05.097 ] 00:15:05.097 }' 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.097 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.357 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.357 16:27:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.928 [2024-11-28 16:27:57.627344] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:05.928 [2024-11-28 16:27:57.627435] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:05.928 [2024-11-28 16:27:57.628014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.188 "name": "raid_bdev1", 00:15:06.188 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:06.188 "strip_size_kb": 64, 00:15:06.188 "state": "online", 00:15:06.188 "raid_level": "raid5f", 00:15:06.188 "superblock": true, 00:15:06.188 "num_base_bdevs": 4, 00:15:06.188 "num_base_bdevs_discovered": 4, 00:15:06.188 "num_base_bdevs_operational": 4, 00:15:06.188 "base_bdevs_list": [ 00:15:06.188 { 00:15:06.188 "name": "spare", 00:15:06.188 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:06.188 "is_configured": true, 00:15:06.188 "data_offset": 2048, 00:15:06.188 "data_size": 63488 00:15:06.188 }, 00:15:06.188 { 00:15:06.188 "name": "BaseBdev2", 00:15:06.188 "uuid": 
"c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:06.188 "is_configured": true, 00:15:06.188 "data_offset": 2048, 00:15:06.188 "data_size": 63488 00:15:06.188 }, 00:15:06.188 { 00:15:06.188 "name": "BaseBdev3", 00:15:06.188 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:06.188 "is_configured": true, 00:15:06.188 "data_offset": 2048, 00:15:06.188 "data_size": 63488 00:15:06.188 }, 00:15:06.188 { 00:15:06.188 "name": "BaseBdev4", 00:15:06.188 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:06.188 "is_configured": true, 00:15:06.188 "data_offset": 2048, 00:15:06.188 "data_size": 63488 00:15:06.188 } 00:15:06.188 ] 00:15:06.188 }' 00:15:06.188 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.449 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.449 16:27:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.449 "name": "raid_bdev1", 00:15:06.449 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:06.449 "strip_size_kb": 64, 00:15:06.449 "state": "online", 00:15:06.449 "raid_level": "raid5f", 00:15:06.449 "superblock": true, 00:15:06.449 "num_base_bdevs": 4, 00:15:06.449 "num_base_bdevs_discovered": 4, 00:15:06.449 "num_base_bdevs_operational": 4, 00:15:06.449 "base_bdevs_list": [ 00:15:06.449 { 00:15:06.449 "name": "spare", 00:15:06.449 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:06.449 "is_configured": true, 00:15:06.449 "data_offset": 2048, 00:15:06.449 "data_size": 63488 00:15:06.449 }, 00:15:06.449 { 00:15:06.449 "name": "BaseBdev2", 00:15:06.449 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:06.449 "is_configured": true, 00:15:06.449 "data_offset": 2048, 00:15:06.449 "data_size": 63488 00:15:06.449 }, 00:15:06.449 { 00:15:06.449 "name": "BaseBdev3", 00:15:06.449 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:06.449 "is_configured": true, 00:15:06.449 "data_offset": 2048, 00:15:06.449 "data_size": 63488 00:15:06.449 }, 00:15:06.449 { 00:15:06.449 "name": "BaseBdev4", 00:15:06.449 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:06.449 "is_configured": true, 00:15:06.449 "data_offset": 2048, 00:15:06.449 "data_size": 63488 00:15:06.449 } 00:15:06.449 ] 00:15:06.449 }' 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.449 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.450 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.450 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:06.450 "name": "raid_bdev1", 00:15:06.450 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:06.450 "strip_size_kb": 64, 00:15:06.450 "state": "online", 00:15:06.450 "raid_level": "raid5f", 00:15:06.450 "superblock": true, 00:15:06.450 "num_base_bdevs": 4, 00:15:06.450 "num_base_bdevs_discovered": 4, 00:15:06.450 "num_base_bdevs_operational": 4, 00:15:06.450 "base_bdevs_list": [ 00:15:06.450 { 00:15:06.450 "name": "spare", 00:15:06.450 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:06.450 "is_configured": true, 00:15:06.450 "data_offset": 2048, 00:15:06.450 "data_size": 63488 00:15:06.450 }, 00:15:06.450 { 00:15:06.450 "name": "BaseBdev2", 00:15:06.450 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:06.450 "is_configured": true, 00:15:06.450 "data_offset": 2048, 00:15:06.450 "data_size": 63488 00:15:06.450 }, 00:15:06.450 { 00:15:06.450 "name": "BaseBdev3", 00:15:06.450 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:06.450 "is_configured": true, 00:15:06.450 "data_offset": 2048, 00:15:06.450 "data_size": 63488 00:15:06.450 }, 00:15:06.450 { 00:15:06.450 "name": "BaseBdev4", 00:15:06.450 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:06.450 "is_configured": true, 00:15:06.450 "data_offset": 2048, 00:15:06.450 "data_size": 63488 00:15:06.450 } 00:15:06.450 ] 00:15:06.450 }' 00:15:06.450 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.450 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.021 [2024-11-28 16:27:58.511919] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:15:07.021 [2024-11-28 16:27:58.511974] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.021 [2024-11-28 16:27:58.512050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.021 [2024-11-28 16:27:58.512140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.021 [2024-11-28 16:27:58.512167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.021 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:07.021 /dev/nbd0 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:07.280 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.281 1+0 records in 
00:15:07.281 1+0 records out 00:15:07.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462775 s, 8.9 MB/s 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.281 16:27:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:07.281 /dev/nbd1 00:15:07.281 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:07.541 16:27:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.541 1+0 records in 00:15:07.541 1+0 records out 00:15:07.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432195 s, 9.5 MB/s 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.541 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.801 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.801 [2024-11-28 16:27:59.535784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:07.802 [2024-11-28 16:27:59.535853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.802 [2024-11-28 16:27:59.535874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:07.802 [2024-11-28 16:27:59.535884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.802 [2024-11-28 16:27:59.538009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.802 [2024-11-28 16:27:59.538051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:07.802 [2024-11-28 16:27:59.538146] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:07.802 [2024-11-28 16:27:59.538198] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.802 [2024-11-28 16:27:59.538318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.802 [2024-11-28 16:27:59.538412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.802 [2024-11-28 16:27:59.538484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:07.802 spare 00:15:07.802 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.802 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:07.802 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.802 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.062 [2024-11-28 16:27:59.638380] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:08.062 [2024-11-28 16:27:59.638410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:08.062 [2024-11-28 16:27:59.638675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:08.062 [2024-11-28 16:27:59.639137] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:08.062 [2024-11-28 16:27:59.639159] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:08.062 [2024-11-28 16:27:59.639299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.062 "name": "raid_bdev1", 00:15:08.062 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:08.062 "strip_size_kb": 64, 00:15:08.062 "state": "online", 00:15:08.062 "raid_level": "raid5f", 00:15:08.062 "superblock": true, 00:15:08.062 "num_base_bdevs": 4, 00:15:08.062 "num_base_bdevs_discovered": 4, 00:15:08.062 "num_base_bdevs_operational": 4, 00:15:08.062 "base_bdevs_list": [ 00:15:08.062 { 
00:15:08.062 "name": "spare", 00:15:08.062 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:08.062 "is_configured": true, 00:15:08.062 "data_offset": 2048, 00:15:08.062 "data_size": 63488 00:15:08.062 }, 00:15:08.062 { 00:15:08.062 "name": "BaseBdev2", 00:15:08.062 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:08.062 "is_configured": true, 00:15:08.062 "data_offset": 2048, 00:15:08.062 "data_size": 63488 00:15:08.062 }, 00:15:08.062 { 00:15:08.062 "name": "BaseBdev3", 00:15:08.062 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:08.062 "is_configured": true, 00:15:08.062 "data_offset": 2048, 00:15:08.062 "data_size": 63488 00:15:08.062 }, 00:15:08.062 { 00:15:08.062 "name": "BaseBdev4", 00:15:08.062 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:08.062 "is_configured": true, 00:15:08.062 "data_offset": 2048, 00:15:08.062 "data_size": 63488 00:15:08.062 } 00:15:08.062 ] 00:15:08.062 }' 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.062 16:27:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.633 "name": "raid_bdev1", 00:15:08.633 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:08.633 "strip_size_kb": 64, 00:15:08.633 "state": "online", 00:15:08.633 "raid_level": "raid5f", 00:15:08.633 "superblock": true, 00:15:08.633 "num_base_bdevs": 4, 00:15:08.633 "num_base_bdevs_discovered": 4, 00:15:08.633 "num_base_bdevs_operational": 4, 00:15:08.633 "base_bdevs_list": [ 00:15:08.633 { 00:15:08.633 "name": "spare", 00:15:08.633 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:08.633 "is_configured": true, 00:15:08.633 "data_offset": 2048, 00:15:08.633 "data_size": 63488 00:15:08.633 }, 00:15:08.633 { 00:15:08.633 "name": "BaseBdev2", 00:15:08.633 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:08.633 "is_configured": true, 00:15:08.633 "data_offset": 2048, 00:15:08.633 "data_size": 63488 00:15:08.633 }, 00:15:08.633 { 00:15:08.633 "name": "BaseBdev3", 00:15:08.633 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:08.633 "is_configured": true, 00:15:08.633 "data_offset": 2048, 00:15:08.633 "data_size": 63488 00:15:08.633 }, 00:15:08.633 { 00:15:08.633 "name": "BaseBdev4", 00:15:08.633 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:08.633 "is_configured": true, 00:15:08.633 "data_offset": 2048, 00:15:08.633 "data_size": 63488 00:15:08.633 } 00:15:08.633 ] 00:15:08.633 }' 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.633 [2024-11-28 16:28:00.283874] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.633 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.633 "name": "raid_bdev1", 00:15:08.633 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:08.633 "strip_size_kb": 64, 00:15:08.633 "state": "online", 00:15:08.633 "raid_level": "raid5f", 00:15:08.633 "superblock": true, 00:15:08.633 "num_base_bdevs": 4, 00:15:08.633 "num_base_bdevs_discovered": 3, 00:15:08.633 "num_base_bdevs_operational": 3, 00:15:08.633 "base_bdevs_list": [ 00:15:08.633 { 00:15:08.633 "name": null, 00:15:08.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.633 "is_configured": false, 00:15:08.633 "data_offset": 0, 00:15:08.633 "data_size": 63488 00:15:08.633 }, 00:15:08.633 { 00:15:08.633 "name": "BaseBdev2", 00:15:08.633 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:08.633 "is_configured": true, 00:15:08.633 "data_offset": 2048, 00:15:08.633 "data_size": 63488 00:15:08.633 }, 00:15:08.633 
{ 00:15:08.634 "name": "BaseBdev3", 00:15:08.634 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:08.634 "is_configured": true, 00:15:08.634 "data_offset": 2048, 00:15:08.634 "data_size": 63488 00:15:08.634 }, 00:15:08.634 { 00:15:08.634 "name": "BaseBdev4", 00:15:08.634 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:08.634 "is_configured": true, 00:15:08.634 "data_offset": 2048, 00:15:08.634 "data_size": 63488 00:15:08.634 } 00:15:08.634 ] 00:15:08.634 }' 00:15:08.634 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.634 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.202 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.202 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.202 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.202 [2024-11-28 16:28:00.715127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.202 [2024-11-28 16:28:00.715338] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:09.202 [2024-11-28 16:28:00.715362] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
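The state dump just verified shows what `bdev_raid_remove_base_bdev spare` leaves behind: the slot stays in `base_bdevs_list`, but with a null name, an all-zero uuid, `is_configured: false`, and `data_offset: 0`, while the discovered/operational counts drop from 4 to 3. A small Python model of that transition, assuming the JSON shape shown in the log (the `remove_base_bdev` helper is illustrative, not an SPDK API):

```python
ZERO_UUID = "00000000-0000-0000-0000-000000000000"

# Array state before removal, shaped like the log's base_bdevs_list.
info = {
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "spare",     "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3",
         "is_configured": True, "data_offset": 2048, "data_size": 63488},
        {"name": "BaseBdev2", "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d",
         "is_configured": True, "data_offset": 2048, "data_size": 63488},
        {"name": "BaseBdev3", "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708",
         "is_configured": True, "data_offset": 2048, "data_size": 63488},
        {"name": "BaseBdev4", "uuid": "0f447188-6835-5b47-93b6-37037264ffcd",
         "is_configured": True, "data_offset": 2048, "data_size": 63488},
    ],
}

def remove_base_bdev(info, name):
    """Model what removal leaves in bdev_raid_get_bdevs output:
    the slot is cleared in place, not deleted from the list."""
    for slot in info["base_bdevs_list"]:
        if slot["name"] == name:
            slot.update(name=None, uuid=ZERO_UUID,
                        is_configured=False, data_offset=0)
            info["num_base_bdevs_discovered"] -= 1
            info["num_base_bdevs_operational"] -= 1
    return info

remove_base_bdev(info, "spare")
print(info["num_base_bdevs_discovered"])  # 3
```

This is why the harness, after removal, calls `verify_raid_bdev_state raid_bdev1 online raid5f 64 3`: `num_base_bdevs` stays 4, but only 3 slots remain configured.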
00:15:09.202 [2024-11-28 16:28:00.715401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.202 [2024-11-28 16:28:00.718603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:09.202 [2024-11-28 16:28:00.720780] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.202 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.202 16:28:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.142 "name": "raid_bdev1", 00:15:10.142 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:10.142 "strip_size_kb": 64, 00:15:10.142 "state": "online", 00:15:10.142 
"raid_level": "raid5f", 00:15:10.142 "superblock": true, 00:15:10.142 "num_base_bdevs": 4, 00:15:10.142 "num_base_bdevs_discovered": 4, 00:15:10.142 "num_base_bdevs_operational": 4, 00:15:10.142 "process": { 00:15:10.142 "type": "rebuild", 00:15:10.142 "target": "spare", 00:15:10.142 "progress": { 00:15:10.142 "blocks": 19200, 00:15:10.142 "percent": 10 00:15:10.142 } 00:15:10.142 }, 00:15:10.142 "base_bdevs_list": [ 00:15:10.142 { 00:15:10.142 "name": "spare", 00:15:10.142 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:10.142 "is_configured": true, 00:15:10.142 "data_offset": 2048, 00:15:10.142 "data_size": 63488 00:15:10.142 }, 00:15:10.142 { 00:15:10.142 "name": "BaseBdev2", 00:15:10.142 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:10.142 "is_configured": true, 00:15:10.142 "data_offset": 2048, 00:15:10.142 "data_size": 63488 00:15:10.142 }, 00:15:10.142 { 00:15:10.142 "name": "BaseBdev3", 00:15:10.142 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:10.142 "is_configured": true, 00:15:10.142 "data_offset": 2048, 00:15:10.142 "data_size": 63488 00:15:10.142 }, 00:15:10.142 { 00:15:10.142 "name": "BaseBdev4", 00:15:10.142 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:10.142 "is_configured": true, 00:15:10.142 "data_offset": 2048, 00:15:10.142 "data_size": 63488 00:15:10.142 } 00:15:10.142 ] 00:15:10.142 }' 00:15:10.142 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.143 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.143 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.143 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.143 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:10.143 16:28:01 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.143 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.143 [2024-11-28 16:28:01.887603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.403 [2024-11-28 16:28:01.925975] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:10.403 [2024-11-28 16:28:01.926050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.403 [2024-11-28 16:28:01.926067] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.403 [2024-11-28 16:28:01.926074] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.403 "name": "raid_bdev1", 00:15:10.403 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:10.403 "strip_size_kb": 64, 00:15:10.403 "state": "online", 00:15:10.403 "raid_level": "raid5f", 00:15:10.403 "superblock": true, 00:15:10.403 "num_base_bdevs": 4, 00:15:10.403 "num_base_bdevs_discovered": 3, 00:15:10.403 "num_base_bdevs_operational": 3, 00:15:10.403 "base_bdevs_list": [ 00:15:10.403 { 00:15:10.403 "name": null, 00:15:10.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.403 "is_configured": false, 00:15:10.403 "data_offset": 0, 00:15:10.403 "data_size": 63488 00:15:10.403 }, 00:15:10.403 { 00:15:10.403 "name": "BaseBdev2", 00:15:10.403 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:10.403 "is_configured": true, 00:15:10.403 "data_offset": 2048, 00:15:10.403 "data_size": 63488 00:15:10.403 }, 00:15:10.403 { 00:15:10.403 "name": "BaseBdev3", 00:15:10.403 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:10.403 "is_configured": true, 00:15:10.403 "data_offset": 2048, 00:15:10.403 "data_size": 63488 00:15:10.403 }, 00:15:10.403 { 00:15:10.403 "name": "BaseBdev4", 00:15:10.403 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:10.403 "is_configured": true, 00:15:10.403 "data_offset": 2048, 00:15:10.403 "data_size": 63488 00:15:10.403 } 00:15:10.403 ] 00:15:10.403 
}' 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.403 16:28:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.663 16:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.663 16:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.663 16:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.663 [2024-11-28 16:28:02.378076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.663 [2024-11-28 16:28:02.378133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.663 [2024-11-28 16:28:02.378177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:10.663 [2024-11-28 16:28:02.378187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.663 [2024-11-28 16:28:02.378623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.663 [2024-11-28 16:28:02.378650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.663 [2024-11-28 16:28:02.378736] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:10.663 [2024-11-28 16:28:02.378755] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.663 [2024-11-28 16:28:02.378769] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
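Once the delayed `spare` passthru is re-created and re-examined, the rebuild starts and `verify_raid_bdev_process` (bdev_raid.sh@169-177) distinguishes a rebuilding array from an idle one with the `jq` fallback filters `.process.type // "none"` and `.process.target // "none"`. A sketch of the equivalent logic in Python, with sample values taken from the log's rebuild dump; the function names are illustrative:

```python
def process_type(info):
    # Equivalent of: jq -r '.process.type // "none"'
    return info.get("process", {}).get("type") or "none"

def process_target(info):
    # Equivalent of: jq -r '.process.target // "none"'
    return info.get("process", {}).get("target") or "none"

# Mid-rebuild, the raid bdev carries a "process" object (values from the log).
rebuilding = {
    "process": {
        "type": "rebuild",
        "target": "spare",
        "progress": {"blocks": 19200, "percent": 10},
    }
}
# An idle array has no "process" key at all, so the fallback yields "none".
idle = {}

print(process_type(rebuilding), process_target(rebuilding))  # rebuild spare
print(process_type(idle), process_target(idle))              # none none
```

The `// "none"` fallback matters because the harness runs the same verifier both before the rebuild is triggered (expecting `none none`) and during it (expecting `rebuild spare`), as the two `verify_raid_bdev_process raid_bdev1 ...` calls in this log show.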
00:15:10.663 [2024-11-28 16:28:02.378788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.663 [2024-11-28 16:28:02.381952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:10.663 [2024-11-28 16:28:02.384122] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.663 spare 00:15:10.664 16:28:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.664 16:28:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.046 "name": "raid_bdev1", 00:15:12.046 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:12.046 "strip_size_kb": 64, 00:15:12.046 "state": 
"online", 00:15:12.046 "raid_level": "raid5f", 00:15:12.046 "superblock": true, 00:15:12.046 "num_base_bdevs": 4, 00:15:12.046 "num_base_bdevs_discovered": 4, 00:15:12.046 "num_base_bdevs_operational": 4, 00:15:12.046 "process": { 00:15:12.046 "type": "rebuild", 00:15:12.046 "target": "spare", 00:15:12.046 "progress": { 00:15:12.046 "blocks": 19200, 00:15:12.046 "percent": 10 00:15:12.046 } 00:15:12.046 }, 00:15:12.046 "base_bdevs_list": [ 00:15:12.046 { 00:15:12.046 "name": "spare", 00:15:12.046 "uuid": "1412193a-164a-5e5e-89be-0fd452ad87a3", 00:15:12.046 "is_configured": true, 00:15:12.046 "data_offset": 2048, 00:15:12.046 "data_size": 63488 00:15:12.046 }, 00:15:12.046 { 00:15:12.046 "name": "BaseBdev2", 00:15:12.046 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:12.046 "is_configured": true, 00:15:12.046 "data_offset": 2048, 00:15:12.046 "data_size": 63488 00:15:12.046 }, 00:15:12.046 { 00:15:12.046 "name": "BaseBdev3", 00:15:12.046 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:12.046 "is_configured": true, 00:15:12.046 "data_offset": 2048, 00:15:12.046 "data_size": 63488 00:15:12.046 }, 00:15:12.046 { 00:15:12.046 "name": "BaseBdev4", 00:15:12.046 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:12.046 "is_configured": true, 00:15:12.046 "data_offset": 2048, 00:15:12.046 "data_size": 63488 00:15:12.046 } 00:15:12.046 ] 00:15:12.046 }' 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.046 16:28:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.046 [2024-11-28 16:28:03.547110] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.046 [2024-11-28 16:28:03.589382] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.046 [2024-11-28 16:28:03.589438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.046 [2024-11-28 16:28:03.589469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.046 [2024-11-28 16:28:03.589478] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.046 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.047 16:28:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.047 "name": "raid_bdev1", 00:15:12.047 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:12.047 "strip_size_kb": 64, 00:15:12.047 "state": "online", 00:15:12.047 "raid_level": "raid5f", 00:15:12.047 "superblock": true, 00:15:12.047 "num_base_bdevs": 4, 00:15:12.047 "num_base_bdevs_discovered": 3, 00:15:12.047 "num_base_bdevs_operational": 3, 00:15:12.047 "base_bdevs_list": [ 00:15:12.047 { 00:15:12.047 "name": null, 00:15:12.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.047 "is_configured": false, 00:15:12.047 "data_offset": 0, 00:15:12.047 "data_size": 63488 00:15:12.047 }, 00:15:12.047 { 00:15:12.047 "name": "BaseBdev2", 00:15:12.047 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:12.047 "is_configured": true, 00:15:12.047 "data_offset": 2048, 00:15:12.047 "data_size": 63488 00:15:12.047 }, 00:15:12.047 { 00:15:12.047 "name": "BaseBdev3", 00:15:12.047 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:12.047 "is_configured": true, 00:15:12.047 "data_offset": 2048, 00:15:12.047 "data_size": 63488 00:15:12.047 }, 00:15:12.047 { 00:15:12.047 "name": "BaseBdev4", 00:15:12.047 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:12.047 "is_configured": true, 00:15:12.047 "data_offset": 2048, 00:15:12.047 
"data_size": 63488 00:15:12.047 } 00:15:12.047 ] 00:15:12.047 }' 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.047 16:28:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.307 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.307 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.307 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.307 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.307 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.308 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.308 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.308 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.308 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.568 "name": "raid_bdev1", 00:15:12.568 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:12.568 "strip_size_kb": 64, 00:15:12.568 "state": "online", 00:15:12.568 "raid_level": "raid5f", 00:15:12.568 "superblock": true, 00:15:12.568 "num_base_bdevs": 4, 00:15:12.568 "num_base_bdevs_discovered": 3, 00:15:12.568 "num_base_bdevs_operational": 3, 00:15:12.568 "base_bdevs_list": [ 00:15:12.568 { 00:15:12.568 "name": null, 00:15:12.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.568 
"is_configured": false, 00:15:12.568 "data_offset": 0, 00:15:12.568 "data_size": 63488 00:15:12.568 }, 00:15:12.568 { 00:15:12.568 "name": "BaseBdev2", 00:15:12.568 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:12.568 "is_configured": true, 00:15:12.568 "data_offset": 2048, 00:15:12.568 "data_size": 63488 00:15:12.568 }, 00:15:12.568 { 00:15:12.568 "name": "BaseBdev3", 00:15:12.568 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:12.568 "is_configured": true, 00:15:12.568 "data_offset": 2048, 00:15:12.568 "data_size": 63488 00:15:12.568 }, 00:15:12.568 { 00:15:12.568 "name": "BaseBdev4", 00:15:12.568 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:12.568 "is_configured": true, 00:15:12.568 "data_offset": 2048, 00:15:12.568 "data_size": 63488 00:15:12.568 } 00:15:12.568 ] 00:15:12.568 }' 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:12.568 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.568 16:28:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.568 [2024-11-28 16:28:04.233189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:12.569 [2024-11-28 16:28:04.233258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.569 [2024-11-28 16:28:04.233277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:12.569 [2024-11-28 16:28:04.233288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.569 [2024-11-28 16:28:04.233711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.569 [2024-11-28 16:28:04.233741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:12.569 [2024-11-28 16:28:04.233808] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:12.569 [2024-11-28 16:28:04.233826] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:12.569 [2024-11-28 16:28:04.233855] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:12.569 [2024-11-28 16:28:04.233866] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:12.569 BaseBdev1 00:15:12.569 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.569 16:28:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:13.508 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.508 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.508 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.509 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.768 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.768 "name": "raid_bdev1", 00:15:13.768 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:13.768 "strip_size_kb": 64, 00:15:13.768 "state": "online", 00:15:13.768 "raid_level": "raid5f", 00:15:13.768 "superblock": true, 00:15:13.768 "num_base_bdevs": 4, 00:15:13.768 "num_base_bdevs_discovered": 3, 00:15:13.768 "num_base_bdevs_operational": 3, 00:15:13.768 "base_bdevs_list": [ 00:15:13.768 { 00:15:13.768 "name": null, 00:15:13.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.768 "is_configured": false, 00:15:13.768 
"data_offset": 0, 00:15:13.768 "data_size": 63488 00:15:13.768 }, 00:15:13.768 { 00:15:13.768 "name": "BaseBdev2", 00:15:13.768 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:13.768 "is_configured": true, 00:15:13.768 "data_offset": 2048, 00:15:13.768 "data_size": 63488 00:15:13.768 }, 00:15:13.768 { 00:15:13.768 "name": "BaseBdev3", 00:15:13.768 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:13.768 "is_configured": true, 00:15:13.768 "data_offset": 2048, 00:15:13.769 "data_size": 63488 00:15:13.769 }, 00:15:13.769 { 00:15:13.769 "name": "BaseBdev4", 00:15:13.769 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:13.769 "is_configured": true, 00:15:13.769 "data_offset": 2048, 00:15:13.769 "data_size": 63488 00:15:13.769 } 00:15:13.769 ] 00:15:13.769 }' 00:15:13.769 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.769 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.029 "name": "raid_bdev1", 00:15:14.029 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:14.029 "strip_size_kb": 64, 00:15:14.029 "state": "online", 00:15:14.029 "raid_level": "raid5f", 00:15:14.029 "superblock": true, 00:15:14.029 "num_base_bdevs": 4, 00:15:14.029 "num_base_bdevs_discovered": 3, 00:15:14.029 "num_base_bdevs_operational": 3, 00:15:14.029 "base_bdevs_list": [ 00:15:14.029 { 00:15:14.029 "name": null, 00:15:14.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.029 "is_configured": false, 00:15:14.029 "data_offset": 0, 00:15:14.029 "data_size": 63488 00:15:14.029 }, 00:15:14.029 { 00:15:14.029 "name": "BaseBdev2", 00:15:14.029 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:14.029 "is_configured": true, 00:15:14.029 "data_offset": 2048, 00:15:14.029 "data_size": 63488 00:15:14.029 }, 00:15:14.029 { 00:15:14.029 "name": "BaseBdev3", 00:15:14.029 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:14.029 "is_configured": true, 00:15:14.029 "data_offset": 2048, 00:15:14.029 "data_size": 63488 00:15:14.029 }, 00:15:14.029 { 00:15:14.029 "name": "BaseBdev4", 00:15:14.029 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:14.029 "is_configured": true, 00:15:14.029 "data_offset": 2048, 00:15:14.029 "data_size": 63488 00:15:14.029 } 00:15:14.029 ] 00:15:14.029 }' 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.029 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.289 
16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.289 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.289 [2024-11-28 16:28:05.830555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.289 [2024-11-28 16:28:05.830713] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.289 [2024-11-28 16:28:05.830732] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.289 request: 00:15:14.289 { 00:15:14.289 "base_bdev": "BaseBdev1", 00:15:14.289 "raid_bdev": "raid_bdev1", 00:15:14.289 "method": "bdev_raid_add_base_bdev", 00:15:14.290 "req_id": 1 00:15:14.290 } 00:15:14.290 Got JSON-RPC error response 00:15:14.290 response: 00:15:14.290 { 00:15:14.290 "code": -22, 00:15:14.290 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:14.290 } 00:15:14.290 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:14.290 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:14.290 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:14.290 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:14.290 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:14.290 16:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.230 "name": "raid_bdev1", 00:15:15.230 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:15.230 "strip_size_kb": 64, 00:15:15.230 "state": "online", 00:15:15.230 "raid_level": "raid5f", 00:15:15.230 "superblock": true, 00:15:15.230 "num_base_bdevs": 4, 00:15:15.230 "num_base_bdevs_discovered": 3, 00:15:15.230 "num_base_bdevs_operational": 3, 00:15:15.230 "base_bdevs_list": [ 00:15:15.230 { 00:15:15.230 "name": null, 00:15:15.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.230 "is_configured": false, 00:15:15.230 "data_offset": 0, 00:15:15.230 "data_size": 63488 00:15:15.230 }, 00:15:15.230 { 00:15:15.230 "name": "BaseBdev2", 00:15:15.230 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:15.230 "is_configured": true, 00:15:15.230 "data_offset": 2048, 00:15:15.230 "data_size": 63488 00:15:15.230 }, 00:15:15.230 { 00:15:15.230 "name": "BaseBdev3", 00:15:15.230 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:15.230 "is_configured": true, 00:15:15.230 "data_offset": 2048, 00:15:15.230 "data_size": 63488 00:15:15.230 }, 00:15:15.230 { 00:15:15.230 "name": "BaseBdev4", 00:15:15.230 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:15.230 "is_configured": true, 00:15:15.230 "data_offset": 2048, 00:15:15.230 "data_size": 63488 00:15:15.230 } 00:15:15.230 ] 00:15:15.230 }' 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.230 16:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.491 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.751 "name": "raid_bdev1", 00:15:15.751 "uuid": "73ae5e0c-b18e-44a4-80da-017a360e49a1", 00:15:15.751 "strip_size_kb": 64, 00:15:15.751 "state": "online", 00:15:15.751 "raid_level": "raid5f", 00:15:15.751 "superblock": true, 00:15:15.751 "num_base_bdevs": 4, 00:15:15.751 "num_base_bdevs_discovered": 3, 00:15:15.751 "num_base_bdevs_operational": 3, 00:15:15.751 "base_bdevs_list": [ 00:15:15.751 { 00:15:15.751 "name": null, 00:15:15.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.751 "is_configured": false, 00:15:15.751 "data_offset": 0, 00:15:15.751 "data_size": 63488 00:15:15.751 }, 00:15:15.751 { 00:15:15.751 "name": "BaseBdev2", 00:15:15.751 "uuid": "c78b2a13-d773-5ff4-9301-897672ad281d", 00:15:15.751 "is_configured": true, 
00:15:15.751 "data_offset": 2048, 00:15:15.751 "data_size": 63488 00:15:15.751 }, 00:15:15.751 { 00:15:15.751 "name": "BaseBdev3", 00:15:15.751 "uuid": "37ea5cf8-ddd3-56e1-93ac-6a9e8ebfb708", 00:15:15.751 "is_configured": true, 00:15:15.751 "data_offset": 2048, 00:15:15.751 "data_size": 63488 00:15:15.751 }, 00:15:15.751 { 00:15:15.751 "name": "BaseBdev4", 00:15:15.751 "uuid": "0f447188-6835-5b47-93b6-37037264ffcd", 00:15:15.751 "is_configured": true, 00:15:15.751 "data_offset": 2048, 00:15:15.751 "data_size": 63488 00:15:15.751 } 00:15:15.751 ] 00:15:15.751 }' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95520 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95520 ']' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95520 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95520 00:15:15.751 killing process with pid 95520 00:15:15.751 Received shutdown signal, test time was about 60.000000 seconds 00:15:15.751 00:15:15.751 Latency(us) 00:15:15.751 [2024-11-28T16:28:07.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.751 [2024-11-28T16:28:07.522Z] 
=================================================================================================================== 00:15:15.751 [2024-11-28T16:28:07.522Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95520' 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95520 00:15:15.751 [2024-11-28 16:28:07.444231] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.751 [2024-11-28 16:28:07.444346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.751 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95520 00:15:15.751 [2024-11-28 16:28:07.444419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.751 [2024-11-28 16:28:07.444429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:15.751 [2024-11-28 16:28:07.496047] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.011 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:16.011 00:15:16.011 real 0m24.933s 00:15:16.011 user 0m31.708s 00:15:16.011 sys 0m2.923s 00:15:16.011 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.011 16:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.011 ************************************ 00:15:16.011 END TEST raid5f_rebuild_test_sb 00:15:16.011 ************************************ 00:15:16.272 16:28:07 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:16.272 16:28:07 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:16.272 16:28:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:16.272 16:28:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.272 16:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.272 ************************************ 00:15:16.272 START TEST raid_state_function_test_sb_4k 00:15:16.272 ************************************ 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:16.272 16:28:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96314 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:16.272 Process raid pid: 96314 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96314' 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96314 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96314 ']' 00:15:16.272 16:28:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.272 16:28:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.272 [2024-11-28 16:28:07.905616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:16.272 [2024-11-28 16:28:07.905751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.532 [2024-11-28 16:28:08.066115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.532 [2024-11-28 16:28:08.111539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.532 [2024-11-28 16:28:08.154004] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.533 [2024-11-28 16:28:08.154045] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.103 [2024-11-28 16:28:08.723535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.103 [2024-11-28 16:28:08.723589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.103 [2024-11-28 16:28:08.723601] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.103 [2024-11-28 16:28:08.723610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.103 
16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.103 "name": "Existed_Raid", 00:15:17.103 "uuid": "4d094489-e853-42b0-8e24-628bef9663c2", 00:15:17.103 "strip_size_kb": 0, 00:15:17.103 "state": "configuring", 00:15:17.103 "raid_level": "raid1", 00:15:17.103 "superblock": true, 00:15:17.103 "num_base_bdevs": 2, 00:15:17.103 "num_base_bdevs_discovered": 0, 00:15:17.103 "num_base_bdevs_operational": 2, 00:15:17.103 "base_bdevs_list": [ 00:15:17.103 { 00:15:17.103 "name": "BaseBdev1", 00:15:17.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.103 "is_configured": false, 00:15:17.103 "data_offset": 0, 00:15:17.103 "data_size": 0 00:15:17.103 }, 00:15:17.103 { 00:15:17.103 "name": "BaseBdev2", 00:15:17.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.103 "is_configured": false, 00:15:17.103 "data_offset": 0, 00:15:17.103 "data_size": 0 00:15:17.103 } 00:15:17.103 ] 00:15:17.103 }' 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.103 16:28:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.673 [2024-11-28 16:28:09.194621] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.673 [2024-11-28 16:28:09.194667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.673 [2024-11-28 16:28:09.206641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.673 [2024-11-28 16:28:09.206681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.673 [2024-11-28 16:28:09.206689] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.673 [2024-11-28 16:28:09.206698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:17.673 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.673 16:28:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.673 [2024-11-28 16:28:09.227418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.673 BaseBdev1 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.674 [ 00:15:17.674 { 00:15:17.674 "name": "BaseBdev1", 00:15:17.674 "aliases": [ 00:15:17.674 
"96de7c33-5ad4-43aa-9dae-d042bdab91d6" 00:15:17.674 ], 00:15:17.674 "product_name": "Malloc disk", 00:15:17.674 "block_size": 4096, 00:15:17.674 "num_blocks": 8192, 00:15:17.674 "uuid": "96de7c33-5ad4-43aa-9dae-d042bdab91d6", 00:15:17.674 "assigned_rate_limits": { 00:15:17.674 "rw_ios_per_sec": 0, 00:15:17.674 "rw_mbytes_per_sec": 0, 00:15:17.674 "r_mbytes_per_sec": 0, 00:15:17.674 "w_mbytes_per_sec": 0 00:15:17.674 }, 00:15:17.674 "claimed": true, 00:15:17.674 "claim_type": "exclusive_write", 00:15:17.674 "zoned": false, 00:15:17.674 "supported_io_types": { 00:15:17.674 "read": true, 00:15:17.674 "write": true, 00:15:17.674 "unmap": true, 00:15:17.674 "flush": true, 00:15:17.674 "reset": true, 00:15:17.674 "nvme_admin": false, 00:15:17.674 "nvme_io": false, 00:15:17.674 "nvme_io_md": false, 00:15:17.674 "write_zeroes": true, 00:15:17.674 "zcopy": true, 00:15:17.674 "get_zone_info": false, 00:15:17.674 "zone_management": false, 00:15:17.674 "zone_append": false, 00:15:17.674 "compare": false, 00:15:17.674 "compare_and_write": false, 00:15:17.674 "abort": true, 00:15:17.674 "seek_hole": false, 00:15:17.674 "seek_data": false, 00:15:17.674 "copy": true, 00:15:17.674 "nvme_iov_md": false 00:15:17.674 }, 00:15:17.674 "memory_domains": [ 00:15:17.674 { 00:15:17.674 "dma_device_id": "system", 00:15:17.674 "dma_device_type": 1 00:15:17.674 }, 00:15:17.674 { 00:15:17.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.674 "dma_device_type": 2 00:15:17.674 } 00:15:17.674 ], 00:15:17.674 "driver_specific": {} 00:15:17.674 } 00:15:17.674 ] 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.674 "name": "Existed_Raid", 00:15:17.674 "uuid": "e18af135-8c89-421c-a74e-5e06214da41f", 00:15:17.674 "strip_size_kb": 0, 00:15:17.674 "state": "configuring", 00:15:17.674 "raid_level": "raid1", 00:15:17.674 "superblock": true, 00:15:17.674 "num_base_bdevs": 2, 00:15:17.674 
"num_base_bdevs_discovered": 1, 00:15:17.674 "num_base_bdevs_operational": 2, 00:15:17.674 "base_bdevs_list": [ 00:15:17.674 { 00:15:17.674 "name": "BaseBdev1", 00:15:17.674 "uuid": "96de7c33-5ad4-43aa-9dae-d042bdab91d6", 00:15:17.674 "is_configured": true, 00:15:17.674 "data_offset": 256, 00:15:17.674 "data_size": 7936 00:15:17.674 }, 00:15:17.674 { 00:15:17.674 "name": "BaseBdev2", 00:15:17.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.674 "is_configured": false, 00:15:17.674 "data_offset": 0, 00:15:17.674 "data_size": 0 00:15:17.674 } 00:15:17.674 ] 00:15:17.674 }' 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.674 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.245 [2024-11-28 16:28:09.714588] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.245 [2024-11-28 16:28:09.714633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.245 [2024-11-28 16:28:09.726607] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.245 [2024-11-28 16:28:09.728389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.245 [2024-11-28 16:28:09.728432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.245 "name": "Existed_Raid", 00:15:18.245 "uuid": "27bf34d9-4805-415e-a8fd-c5865bebfb52", 00:15:18.245 "strip_size_kb": 0, 00:15:18.245 "state": "configuring", 00:15:18.245 "raid_level": "raid1", 00:15:18.245 "superblock": true, 00:15:18.245 "num_base_bdevs": 2, 00:15:18.245 "num_base_bdevs_discovered": 1, 00:15:18.245 "num_base_bdevs_operational": 2, 00:15:18.245 "base_bdevs_list": [ 00:15:18.245 { 00:15:18.245 "name": "BaseBdev1", 00:15:18.245 "uuid": "96de7c33-5ad4-43aa-9dae-d042bdab91d6", 00:15:18.245 "is_configured": true, 00:15:18.245 "data_offset": 256, 00:15:18.245 "data_size": 7936 00:15:18.245 }, 00:15:18.245 { 00:15:18.245 "name": "BaseBdev2", 00:15:18.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.245 "is_configured": false, 00:15:18.245 "data_offset": 0, 00:15:18.245 "data_size": 0 00:15:18.245 } 00:15:18.245 ] 00:15:18.245 }' 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.245 16:28:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.506 16:28:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 [2024-11-28 16:28:10.237648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.506 [2024-11-28 16:28:10.238273] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:18.506 [2024-11-28 16:28:10.238344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:18.506 BaseBdev2 00:15:18.506 [2024-11-28 16:28:10.239347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.506 [2024-11-28 16:28:10.239888] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:18.506 [2024-11-28 16:28:10.239993] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:18.506 [2024-11-28 16:28:10.240390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:18.506 16:28:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.506 [ 00:15:18.506 { 00:15:18.506 "name": "BaseBdev2", 00:15:18.506 "aliases": [ 00:15:18.506 "80f9cdcd-3c06-4bde-a39e-c90df9ff9938" 00:15:18.506 ], 00:15:18.506 "product_name": "Malloc disk", 00:15:18.506 "block_size": 4096, 00:15:18.506 "num_blocks": 8192, 00:15:18.506 "uuid": "80f9cdcd-3c06-4bde-a39e-c90df9ff9938", 00:15:18.506 "assigned_rate_limits": { 00:15:18.506 "rw_ios_per_sec": 0, 00:15:18.506 "rw_mbytes_per_sec": 0, 00:15:18.506 "r_mbytes_per_sec": 0, 00:15:18.506 "w_mbytes_per_sec": 0 00:15:18.506 }, 00:15:18.506 "claimed": true, 00:15:18.506 "claim_type": "exclusive_write", 00:15:18.506 "zoned": false, 00:15:18.506 "supported_io_types": { 00:15:18.506 "read": true, 00:15:18.506 "write": true, 00:15:18.506 "unmap": true, 00:15:18.506 "flush": true, 00:15:18.506 "reset": true, 00:15:18.506 "nvme_admin": false, 00:15:18.506 "nvme_io": false, 00:15:18.506 "nvme_io_md": false, 00:15:18.506 "write_zeroes": true, 00:15:18.506 "zcopy": true, 00:15:18.506 "get_zone_info": false, 00:15:18.506 "zone_management": false, 00:15:18.506 "zone_append": false, 00:15:18.506 "compare": false, 00:15:18.506 "compare_and_write": false, 00:15:18.506 "abort": true, 00:15:18.506 "seek_hole": false, 00:15:18.506 "seek_data": false, 00:15:18.506 "copy": true, 00:15:18.506 "nvme_iov_md": false 
00:15:18.506 }, 00:15:18.506 "memory_domains": [ 00:15:18.506 { 00:15:18.506 "dma_device_id": "system", 00:15:18.506 "dma_device_type": 1 00:15:18.506 }, 00:15:18.506 { 00:15:18.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.506 "dma_device_type": 2 00:15:18.506 } 00:15:18.506 ], 00:15:18.506 "driver_specific": {} 00:15:18.506 } 00:15:18.506 ] 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.506 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.776 "name": "Existed_Raid", 00:15:18.776 "uuid": "27bf34d9-4805-415e-a8fd-c5865bebfb52", 00:15:18.776 "strip_size_kb": 0, 00:15:18.776 "state": "online", 00:15:18.776 "raid_level": "raid1", 00:15:18.776 "superblock": true, 00:15:18.776 "num_base_bdevs": 2, 00:15:18.776 "num_base_bdevs_discovered": 2, 00:15:18.776 "num_base_bdevs_operational": 2, 00:15:18.776 "base_bdevs_list": [ 00:15:18.776 { 00:15:18.776 "name": "BaseBdev1", 00:15:18.776 "uuid": "96de7c33-5ad4-43aa-9dae-d042bdab91d6", 00:15:18.776 "is_configured": true, 00:15:18.776 "data_offset": 256, 00:15:18.776 "data_size": 7936 00:15:18.776 }, 00:15:18.776 { 00:15:18.776 "name": "BaseBdev2", 00:15:18.776 "uuid": "80f9cdcd-3c06-4bde-a39e-c90df9ff9938", 00:15:18.776 "is_configured": true, 00:15:18.776 "data_offset": 256, 00:15:18.776 "data_size": 7936 00:15:18.776 } 00:15:18.776 ] 00:15:18.776 }' 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.776 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:19.037 16:28:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.037 [2024-11-28 16:28:10.744978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.037 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.037 "name": "Existed_Raid", 00:15:19.037 "aliases": [ 00:15:19.037 "27bf34d9-4805-415e-a8fd-c5865bebfb52" 00:15:19.037 ], 00:15:19.037 "product_name": "Raid Volume", 00:15:19.037 "block_size": 4096, 00:15:19.037 "num_blocks": 7936, 00:15:19.037 "uuid": "27bf34d9-4805-415e-a8fd-c5865bebfb52", 00:15:19.037 "assigned_rate_limits": { 00:15:19.037 "rw_ios_per_sec": 0, 00:15:19.037 "rw_mbytes_per_sec": 0, 00:15:19.037 "r_mbytes_per_sec": 0, 00:15:19.037 "w_mbytes_per_sec": 0 00:15:19.037 }, 00:15:19.037 "claimed": false, 00:15:19.037 "zoned": false, 00:15:19.037 "supported_io_types": { 00:15:19.037 "read": true, 
00:15:19.037 "write": true, 00:15:19.037 "unmap": false, 00:15:19.037 "flush": false, 00:15:19.037 "reset": true, 00:15:19.038 "nvme_admin": false, 00:15:19.038 "nvme_io": false, 00:15:19.038 "nvme_io_md": false, 00:15:19.038 "write_zeroes": true, 00:15:19.038 "zcopy": false, 00:15:19.038 "get_zone_info": false, 00:15:19.038 "zone_management": false, 00:15:19.038 "zone_append": false, 00:15:19.038 "compare": false, 00:15:19.038 "compare_and_write": false, 00:15:19.038 "abort": false, 00:15:19.038 "seek_hole": false, 00:15:19.038 "seek_data": false, 00:15:19.038 "copy": false, 00:15:19.038 "nvme_iov_md": false 00:15:19.038 }, 00:15:19.038 "memory_domains": [ 00:15:19.038 { 00:15:19.038 "dma_device_id": "system", 00:15:19.038 "dma_device_type": 1 00:15:19.038 }, 00:15:19.038 { 00:15:19.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.038 "dma_device_type": 2 00:15:19.038 }, 00:15:19.038 { 00:15:19.038 "dma_device_id": "system", 00:15:19.038 "dma_device_type": 1 00:15:19.038 }, 00:15:19.038 { 00:15:19.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.038 "dma_device_type": 2 00:15:19.038 } 00:15:19.038 ], 00:15:19.038 "driver_specific": { 00:15:19.038 "raid": { 00:15:19.038 "uuid": "27bf34d9-4805-415e-a8fd-c5865bebfb52", 00:15:19.038 "strip_size_kb": 0, 00:15:19.038 "state": "online", 00:15:19.038 "raid_level": "raid1", 00:15:19.038 "superblock": true, 00:15:19.038 "num_base_bdevs": 2, 00:15:19.038 "num_base_bdevs_discovered": 2, 00:15:19.038 "num_base_bdevs_operational": 2, 00:15:19.038 "base_bdevs_list": [ 00:15:19.038 { 00:15:19.038 "name": "BaseBdev1", 00:15:19.038 "uuid": "96de7c33-5ad4-43aa-9dae-d042bdab91d6", 00:15:19.038 "is_configured": true, 00:15:19.038 "data_offset": 256, 00:15:19.038 "data_size": 7936 00:15:19.038 }, 00:15:19.038 { 00:15:19.038 "name": "BaseBdev2", 00:15:19.038 "uuid": "80f9cdcd-3c06-4bde-a39e-c90df9ff9938", 00:15:19.038 "is_configured": true, 00:15:19.038 "data_offset": 256, 00:15:19.038 "data_size": 7936 00:15:19.038 } 
00:15:19.038 ] 00:15:19.038 } 00:15:19.038 } 00:15:19.038 }' 00:15:19.038 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:19.299 BaseBdev2' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 [2024-11-28 16:28:10.984358] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.299 16:28:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.299 "name": "Existed_Raid", 00:15:19.299 "uuid": "27bf34d9-4805-415e-a8fd-c5865bebfb52", 00:15:19.299 "strip_size_kb": 0, 00:15:19.299 "state": "online", 00:15:19.299 "raid_level": "raid1", 00:15:19.299 "superblock": true, 00:15:19.299 "num_base_bdevs": 2, 00:15:19.299 
"num_base_bdevs_discovered": 1, 00:15:19.299 "num_base_bdevs_operational": 1, 00:15:19.299 "base_bdevs_list": [ 00:15:19.299 { 00:15:19.299 "name": null, 00:15:19.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.299 "is_configured": false, 00:15:19.299 "data_offset": 0, 00:15:19.299 "data_size": 7936 00:15:19.299 }, 00:15:19.299 { 00:15:19.299 "name": "BaseBdev2", 00:15:19.299 "uuid": "80f9cdcd-3c06-4bde-a39e-c90df9ff9938", 00:15:19.299 "is_configured": true, 00:15:19.299 "data_offset": 256, 00:15:19.299 "data_size": 7936 00:15:19.299 } 00:15:19.299 ] 00:15:19.299 }' 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.299 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:19.870 16:28:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.870 [2024-11-28 16:28:11.490982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.870 [2024-11-28 16:28:11.491124] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.870 [2024-11-28 16:28:11.502798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.870 [2024-11-28 16:28:11.502946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.870 [2024-11-28 16:28:11.502995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96314 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96314 ']' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96314 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96314 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.870 killing process with pid 96314 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96314' 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96314 00:15:19.870 [2024-11-28 16:28:11.589616] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.870 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96314 00:15:19.870 [2024-11-28 16:28:11.590557] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.130 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:20.130 00:15:20.130 real 0m4.031s 00:15:20.131 user 0m6.342s 00:15:20.131 sys 0m0.864s 00:15:20.131 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:15:20.131 16:28:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.131 ************************************ 00:15:20.131 END TEST raid_state_function_test_sb_4k 00:15:20.131 ************************************ 00:15:20.391 16:28:11 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:20.391 16:28:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:20.391 16:28:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.391 16:28:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.391 ************************************ 00:15:20.391 START TEST raid_superblock_test_4k 00:15:20.391 ************************************ 00:15:20.391 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:20.391 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:20.391 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:20.391 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:20.391 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96551 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96551 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96551 ']' 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.392 16:28:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.392 [2024-11-28 16:28:12.016893] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:20.392 [2024-11-28 16:28:12.017046] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96551 ] 00:15:20.652 [2024-11-28 16:28:12.175614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.652 [2024-11-28 16:28:12.222734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.652 [2024-11-28 16:28:12.265771] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.652 [2024-11-28 16:28:12.265806] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.223 malloc1 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.223 [2024-11-28 16:28:12.848104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:21.223 [2024-11-28 16:28:12.848240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.223 [2024-11-28 16:28:12.848282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:21.223 [2024-11-28 16:28:12.848317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.223 [2024-11-28 16:28:12.850504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.223 [2024-11-28 16:28:12.850577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:21.223 pt1 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:21.223 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.224 malloc2 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.224 [2024-11-28 16:28:12.897427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.224 [2024-11-28 16:28:12.897539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.224 [2024-11-28 16:28:12.897576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:21.224 [2024-11-28 16:28:12.897601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.224 [2024-11-28 16:28:12.902565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.224 [2024-11-28 
16:28:12.902645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.224 pt2 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.224 [2024-11-28 16:28:12.910943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:21.224 [2024-11-28 16:28:12.913948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.224 [2024-11-28 16:28:12.914237] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:21.224 [2024-11-28 16:28:12.914278] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:21.224 [2024-11-28 16:28:12.914661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:21.224 [2024-11-28 16:28:12.914878] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:21.224 [2024-11-28 16:28:12.914896] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:21.224 [2024-11-28 16:28:12.915120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.224 "name": "raid_bdev1", 00:15:21.224 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:21.224 "strip_size_kb": 0, 00:15:21.224 "state": "online", 00:15:21.224 "raid_level": "raid1", 00:15:21.224 "superblock": true, 00:15:21.224 "num_base_bdevs": 2, 00:15:21.224 
"num_base_bdevs_discovered": 2, 00:15:21.224 "num_base_bdevs_operational": 2, 00:15:21.224 "base_bdevs_list": [ 00:15:21.224 { 00:15:21.224 "name": "pt1", 00:15:21.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.224 "is_configured": true, 00:15:21.224 "data_offset": 256, 00:15:21.224 "data_size": 7936 00:15:21.224 }, 00:15:21.224 { 00:15:21.224 "name": "pt2", 00:15:21.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.224 "is_configured": true, 00:15:21.224 "data_offset": 256, 00:15:21.224 "data_size": 7936 00:15:21.224 } 00:15:21.224 ] 00:15:21.224 }' 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.224 16:28:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.795 [2024-11-28 16:28:13.366512] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.795 "name": "raid_bdev1", 00:15:21.795 "aliases": [ 00:15:21.795 "ab593313-0bc3-4b99-aaf9-56f2195e0192" 00:15:21.795 ], 00:15:21.795 "product_name": "Raid Volume", 00:15:21.795 "block_size": 4096, 00:15:21.795 "num_blocks": 7936, 00:15:21.795 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:21.795 "assigned_rate_limits": { 00:15:21.795 "rw_ios_per_sec": 0, 00:15:21.795 "rw_mbytes_per_sec": 0, 00:15:21.795 "r_mbytes_per_sec": 0, 00:15:21.795 "w_mbytes_per_sec": 0 00:15:21.795 }, 00:15:21.795 "claimed": false, 00:15:21.795 "zoned": false, 00:15:21.795 "supported_io_types": { 00:15:21.795 "read": true, 00:15:21.795 "write": true, 00:15:21.795 "unmap": false, 00:15:21.795 "flush": false, 00:15:21.795 "reset": true, 00:15:21.795 "nvme_admin": false, 00:15:21.795 "nvme_io": false, 00:15:21.795 "nvme_io_md": false, 00:15:21.795 "write_zeroes": true, 00:15:21.795 "zcopy": false, 00:15:21.795 "get_zone_info": false, 00:15:21.795 "zone_management": false, 00:15:21.795 "zone_append": false, 00:15:21.795 "compare": false, 00:15:21.795 "compare_and_write": false, 00:15:21.795 "abort": false, 00:15:21.795 "seek_hole": false, 00:15:21.795 "seek_data": false, 00:15:21.795 "copy": false, 00:15:21.795 "nvme_iov_md": false 00:15:21.795 }, 00:15:21.795 "memory_domains": [ 00:15:21.795 { 00:15:21.795 "dma_device_id": "system", 00:15:21.795 "dma_device_type": 1 00:15:21.795 }, 00:15:21.795 { 00:15:21.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.795 "dma_device_type": 2 00:15:21.795 }, 00:15:21.795 { 00:15:21.795 "dma_device_id": "system", 00:15:21.795 "dma_device_type": 1 00:15:21.795 }, 00:15:21.795 { 00:15:21.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.795 "dma_device_type": 2 00:15:21.795 } 00:15:21.795 ], 
00:15:21.795 "driver_specific": { 00:15:21.795 "raid": { 00:15:21.795 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:21.795 "strip_size_kb": 0, 00:15:21.795 "state": "online", 00:15:21.795 "raid_level": "raid1", 00:15:21.795 "superblock": true, 00:15:21.795 "num_base_bdevs": 2, 00:15:21.795 "num_base_bdevs_discovered": 2, 00:15:21.795 "num_base_bdevs_operational": 2, 00:15:21.795 "base_bdevs_list": [ 00:15:21.795 { 00:15:21.795 "name": "pt1", 00:15:21.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.795 "is_configured": true, 00:15:21.795 "data_offset": 256, 00:15:21.795 "data_size": 7936 00:15:21.795 }, 00:15:21.795 { 00:15:21.795 "name": "pt2", 00:15:21.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.795 "is_configured": true, 00:15:21.795 "data_offset": 256, 00:15:21.795 "data_size": 7936 00:15:21.795 } 00:15:21.795 ] 00:15:21.795 } 00:15:21.795 } 00:15:21.795 }' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:21.795 pt2' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.795 16:28:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.795 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 [2024-11-28 16:28:13.602062] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab593313-0bc3-4b99-aaf9-56f2195e0192 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z ab593313-0bc3-4b99-aaf9-56f2195e0192 ']' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 [2024-11-28 16:28:13.641772] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.056 [2024-11-28 16:28:13.641797] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.056 [2024-11-28 16:28:13.641878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.056 [2024-11-28 16:28:13.641947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.056 [2024-11-28 16:28:13.641957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.056 [2024-11-28 16:28:13.769580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:22.056 [2024-11-28 16:28:13.771460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:22.056 [2024-11-28 16:28:13.771532] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:22.056 [2024-11-28 16:28:13.771587] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:22.056 [2024-11-28 16:28:13.771605] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.056 [2024-11-28 16:28:13.771613] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:22.056 request: 00:15:22.056 { 00:15:22.056 "name": "raid_bdev1", 00:15:22.056 "raid_level": "raid1", 00:15:22.056 "base_bdevs": [ 00:15:22.056 "malloc1", 00:15:22.056 "malloc2" 00:15:22.056 ], 00:15:22.056 "superblock": false, 00:15:22.056 "method": "bdev_raid_create", 00:15:22.056 "req_id": 1 00:15:22.056 } 00:15:22.056 Got JSON-RPC error response 00:15:22.056 response: 00:15:22.056 { 00:15:22.056 "code": -17, 00:15:22.056 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:22.056 } 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:22.056 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:22.057 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.057 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:22.057 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.057 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.057 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.317 [2024-11-28 16:28:13.837444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:22.317 [2024-11-28 16:28:13.837548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.317 [2024-11-28 16:28:13.837581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:22.317 [2024-11-28 16:28:13.837607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.317 [2024-11-28 16:28:13.839667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.317 [2024-11-28 16:28:13.839751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:22.317 [2024-11-28 16:28:13.839834] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:22.317 [2024-11-28 16:28:13.839906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.317 pt1 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.317 "name": "raid_bdev1", 00:15:22.317 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:22.317 "strip_size_kb": 0, 00:15:22.317 "state": "configuring", 00:15:22.317 "raid_level": "raid1", 00:15:22.317 "superblock": true, 00:15:22.317 "num_base_bdevs": 2, 00:15:22.317 "num_base_bdevs_discovered": 1, 00:15:22.317 "num_base_bdevs_operational": 2, 00:15:22.317 "base_bdevs_list": [ 00:15:22.317 { 00:15:22.317 "name": "pt1", 00:15:22.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.317 "is_configured": true, 00:15:22.317 "data_offset": 256, 00:15:22.317 "data_size": 7936 00:15:22.317 }, 00:15:22.317 { 00:15:22.317 "name": null, 00:15:22.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.317 "is_configured": false, 00:15:22.317 "data_offset": 256, 00:15:22.317 "data_size": 7936 00:15:22.317 } 
00:15:22.317 ] 00:15:22.317 }' 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.317 16:28:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.580 [2024-11-28 16:28:14.324617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.580 [2024-11-28 16:28:14.324737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.580 [2024-11-28 16:28:14.324777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:22.580 [2024-11-28 16:28:14.324804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.580 [2024-11-28 16:28:14.325214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.580 [2024-11-28 16:28:14.325273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.580 [2024-11-28 16:28:14.325367] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:22.580 [2024-11-28 16:28:14.325413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.580 [2024-11-28 16:28:14.325550] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:15:22.580 [2024-11-28 16:28:14.325589] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:22.580 [2024-11-28 16:28:14.325825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:22.580 [2024-11-28 16:28:14.325992] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:22.580 [2024-11-28 16:28:14.326051] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:22.580 [2024-11-28 16:28:14.326189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.580 pt2 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.580 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.581 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.852 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.852 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.852 "name": "raid_bdev1", 00:15:22.852 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:22.852 "strip_size_kb": 0, 00:15:22.852 "state": "online", 00:15:22.852 "raid_level": "raid1", 00:15:22.852 "superblock": true, 00:15:22.852 "num_base_bdevs": 2, 00:15:22.852 "num_base_bdevs_discovered": 2, 00:15:22.852 "num_base_bdevs_operational": 2, 00:15:22.852 "base_bdevs_list": [ 00:15:22.852 { 00:15:22.852 "name": "pt1", 00:15:22.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.852 "is_configured": true, 00:15:22.852 "data_offset": 256, 00:15:22.852 "data_size": 7936 00:15:22.852 }, 00:15:22.852 { 00:15:22.852 "name": "pt2", 00:15:22.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.852 "is_configured": true, 00:15:22.852 "data_offset": 256, 00:15:22.852 "data_size": 7936 00:15:22.852 } 00:15:22.852 ] 00:15:22.852 }' 00:15:22.852 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.852 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.121 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.121 [2024-11-28 16:28:14.780171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.122 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.122 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.122 "name": "raid_bdev1", 00:15:23.122 "aliases": [ 00:15:23.122 "ab593313-0bc3-4b99-aaf9-56f2195e0192" 00:15:23.122 ], 00:15:23.122 "product_name": "Raid Volume", 00:15:23.122 "block_size": 4096, 00:15:23.122 "num_blocks": 7936, 00:15:23.122 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:23.122 "assigned_rate_limits": { 00:15:23.122 "rw_ios_per_sec": 0, 00:15:23.122 "rw_mbytes_per_sec": 0, 00:15:23.122 "r_mbytes_per_sec": 0, 00:15:23.122 "w_mbytes_per_sec": 0 00:15:23.122 }, 00:15:23.122 "claimed": false, 00:15:23.122 "zoned": false, 00:15:23.122 "supported_io_types": { 00:15:23.122 "read": true, 00:15:23.122 "write": true, 00:15:23.122 "unmap": false, 
00:15:23.122 "flush": false, 00:15:23.122 "reset": true, 00:15:23.122 "nvme_admin": false, 00:15:23.122 "nvme_io": false, 00:15:23.122 "nvme_io_md": false, 00:15:23.122 "write_zeroes": true, 00:15:23.122 "zcopy": false, 00:15:23.122 "get_zone_info": false, 00:15:23.122 "zone_management": false, 00:15:23.122 "zone_append": false, 00:15:23.122 "compare": false, 00:15:23.122 "compare_and_write": false, 00:15:23.122 "abort": false, 00:15:23.122 "seek_hole": false, 00:15:23.122 "seek_data": false, 00:15:23.122 "copy": false, 00:15:23.122 "nvme_iov_md": false 00:15:23.122 }, 00:15:23.122 "memory_domains": [ 00:15:23.122 { 00:15:23.122 "dma_device_id": "system", 00:15:23.122 "dma_device_type": 1 00:15:23.122 }, 00:15:23.122 { 00:15:23.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.122 "dma_device_type": 2 00:15:23.122 }, 00:15:23.122 { 00:15:23.122 "dma_device_id": "system", 00:15:23.122 "dma_device_type": 1 00:15:23.122 }, 00:15:23.122 { 00:15:23.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.122 "dma_device_type": 2 00:15:23.122 } 00:15:23.122 ], 00:15:23.122 "driver_specific": { 00:15:23.122 "raid": { 00:15:23.122 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:23.122 "strip_size_kb": 0, 00:15:23.122 "state": "online", 00:15:23.122 "raid_level": "raid1", 00:15:23.122 "superblock": true, 00:15:23.122 "num_base_bdevs": 2, 00:15:23.122 "num_base_bdevs_discovered": 2, 00:15:23.122 "num_base_bdevs_operational": 2, 00:15:23.122 "base_bdevs_list": [ 00:15:23.122 { 00:15:23.122 "name": "pt1", 00:15:23.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.122 "is_configured": true, 00:15:23.122 "data_offset": 256, 00:15:23.122 "data_size": 7936 00:15:23.122 }, 00:15:23.122 { 00:15:23.122 "name": "pt2", 00:15:23.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.122 "is_configured": true, 00:15:23.122 "data_offset": 256, 00:15:23.122 "data_size": 7936 00:15:23.122 } 00:15:23.122 ] 00:15:23.122 } 00:15:23.122 } 00:15:23.122 }' 00:15:23.122 
16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.122 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:23.122 pt2' 00:15:23.122 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.122 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:23.122 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.400 16:28:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:23.400 [2024-11-28 16:28:14.995780] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' ab593313-0bc3-4b99-aaf9-56f2195e0192 '!=' ab593313-0bc3-4b99-aaf9-56f2195e0192 ']' 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.400 [2024-11-28 16:28:15.043488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.400 "name": "raid_bdev1", 00:15:23.400 "uuid": 
"ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:23.400 "strip_size_kb": 0, 00:15:23.400 "state": "online", 00:15:23.400 "raid_level": "raid1", 00:15:23.400 "superblock": true, 00:15:23.400 "num_base_bdevs": 2, 00:15:23.400 "num_base_bdevs_discovered": 1, 00:15:23.400 "num_base_bdevs_operational": 1, 00:15:23.400 "base_bdevs_list": [ 00:15:23.400 { 00:15:23.400 "name": null, 00:15:23.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.400 "is_configured": false, 00:15:23.400 "data_offset": 0, 00:15:23.400 "data_size": 7936 00:15:23.400 }, 00:15:23.400 { 00:15:23.400 "name": "pt2", 00:15:23.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.400 "is_configured": true, 00:15:23.400 "data_offset": 256, 00:15:23.400 "data_size": 7936 00:15:23.400 } 00:15:23.400 ] 00:15:23.400 }' 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.400 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 [2024-11-28 16:28:15.522638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.003 [2024-11-28 16:28:15.522710] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.003 [2024-11-28 16:28:15.522815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.003 [2024-11-28 16:28:15.522883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.003 [2024-11-28 16:28:15.522932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state 
offline 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.003 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 [2024-11-28 16:28:15.598530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.003 [2024-11-28 16:28:15.598581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.003 [2024-11-28 16:28:15.598599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:24.003 [2024-11-28 16:28:15.598607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.003 [2024-11-28 16:28:15.600965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.003 [2024-11-28 16:28:15.601002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.003 [2024-11-28 16:28:15.601086] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:24.003 [2024-11-28 16:28:15.601115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.003 [2024-11-28 16:28:15.601184] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:24.004 [2024-11-28 16:28:15.601192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.004 [2024-11-28 16:28:15.601394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:24.004 [2024-11-28 16:28:15.601501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:24.004 [2024-11-28 16:28:15.601518] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006d00 00:15:24.004 [2024-11-28 16:28:15.601628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.004 pt2 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.004 16:28:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.004 "name": "raid_bdev1", 00:15:24.004 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:24.004 "strip_size_kb": 0, 00:15:24.004 "state": "online", 00:15:24.004 "raid_level": "raid1", 00:15:24.004 "superblock": true, 00:15:24.004 "num_base_bdevs": 2, 00:15:24.004 "num_base_bdevs_discovered": 1, 00:15:24.004 "num_base_bdevs_operational": 1, 00:15:24.004 "base_bdevs_list": [ 00:15:24.004 { 00:15:24.004 "name": null, 00:15:24.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.004 "is_configured": false, 00:15:24.004 "data_offset": 256, 00:15:24.004 "data_size": 7936 00:15:24.004 }, 00:15:24.004 { 00:15:24.004 "name": "pt2", 00:15:24.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.004 "is_configured": true, 00:15:24.004 "data_offset": 256, 00:15:24.004 "data_size": 7936 00:15:24.004 } 00:15:24.004 ] 00:15:24.004 }' 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.004 16:28:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.263 [2024-11-28 16:28:16.017852] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.263 [2024-11-28 16:28:16.017918] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.263 [2024-11-28 16:28:16.017984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.263 [2024-11-28 16:28:16.018032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:24.263 [2024-11-28 16:28:16.018080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.263 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.522 [2024-11-28 16:28:16.081699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:24.522 [2024-11-28 16:28:16.081789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.522 [2024-11-28 16:28:16.081840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:24.522 [2024-11-28 16:28:16.081899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.522 [2024-11-28 16:28:16.083910] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.522 [2024-11-28 16:28:16.084004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:24.522 [2024-11-28 16:28:16.084083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:24.522 [2024-11-28 16:28:16.084138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:24.522 [2024-11-28 16:28:16.084262] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:24.522 [2024-11-28 16:28:16.084311] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.522 [2024-11-28 16:28:16.084327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:24.522 [2024-11-28 16:28:16.084361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.522 [2024-11-28 16:28:16.084430] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:24.522 [2024-11-28 16:28:16.084445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.522 [2024-11-28 16:28:16.084647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:24.522 [2024-11-28 16:28:16.084750] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:24.522 [2024-11-28 16:28:16.084759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:24.522 [2024-11-28 16:28:16.084879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.522 pt1 00:15:24.522 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.523 "name": "raid_bdev1", 00:15:24.523 "uuid": "ab593313-0bc3-4b99-aaf9-56f2195e0192", 00:15:24.523 "strip_size_kb": 0, 00:15:24.523 "state": "online", 00:15:24.523 
"raid_level": "raid1", 00:15:24.523 "superblock": true, 00:15:24.523 "num_base_bdevs": 2, 00:15:24.523 "num_base_bdevs_discovered": 1, 00:15:24.523 "num_base_bdevs_operational": 1, 00:15:24.523 "base_bdevs_list": [ 00:15:24.523 { 00:15:24.523 "name": null, 00:15:24.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.523 "is_configured": false, 00:15:24.523 "data_offset": 256, 00:15:24.523 "data_size": 7936 00:15:24.523 }, 00:15:24.523 { 00:15:24.523 "name": "pt2", 00:15:24.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.523 "is_configured": true, 00:15:24.523 "data_offset": 256, 00:15:24.523 "data_size": 7936 00:15:24.523 } 00:15:24.523 ] 00:15:24.523 }' 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.523 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:15:25.092 [2024-11-28 16:28:16.593103] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' ab593313-0bc3-4b99-aaf9-56f2195e0192 '!=' ab593313-0bc3-4b99-aaf9-56f2195e0192 ']' 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96551 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96551 ']' 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96551 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96551 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96551' 00:15:25.092 killing process with pid 96551 00:15:25.092 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96551 00:15:25.092 [2024-11-28 16:28:16.673865] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.092 [2024-11-28 16:28:16.673982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.092 [2024-11-28 16:28:16.674054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.092 16:28:16 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96551 00:15:25.092 [2024-11-28 16:28:16.674114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:25.092 [2024-11-28 16:28:16.696423] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.352 16:28:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:25.352 00:15:25.352 real 0m5.021s 00:15:25.352 user 0m8.206s 00:15:25.352 sys 0m1.086s 00:15:25.352 ************************************ 00:15:25.352 END TEST raid_superblock_test_4k 00:15:25.352 ************************************ 00:15:25.352 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.352 16:28:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.352 16:28:17 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:25.352 16:28:17 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:25.352 16:28:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:25.352 16:28:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.352 16:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.352 ************************************ 00:15:25.352 START TEST raid_rebuild_test_sb_4k 00:15:25.352 ************************************ 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:25.352 16:28:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96869 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96869 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96869 ']' 00:15:25.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.352 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.352 [2024-11-28 16:28:17.120303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:25.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:25.352 Zero copy mechanism will not be used. 
00:15:25.352 [2024-11-28 16:28:17.120552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96869 ] 00:15:25.611 [2024-11-28 16:28:17.280289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.611 [2024-11-28 16:28:17.325440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.611 [2024-11-28 16:28:17.368209] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.611 [2024-11-28 16:28:17.368245] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.180 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.180 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:26.180 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.180 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:26.180 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.180 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 BaseBdev1_malloc 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 [2024-11-28 16:28:17.958438] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.439 [2024-11-28 16:28:17.958500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.439 [2024-11-28 16:28:17.958541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.439 [2024-11-28 16:28:17.958567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.439 [2024-11-28 16:28:17.960812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.439 [2024-11-28 16:28:17.960868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.439 BaseBdev1 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 BaseBdev2_malloc 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.439 16:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 [2024-11-28 16:28:17.996846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:26.439 [2024-11-28 16:28:17.996901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:26.439 [2024-11-28 16:28:17.996923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.439 [2024-11-28 16:28:17.996932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.439 [2024-11-28 16:28:17.999307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.439 [2024-11-28 16:28:17.999344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.440 BaseBdev2 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 spare_malloc 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 spare_delay 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 
[2024-11-28 16:28:18.037387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.440 [2024-11-28 16:28:18.037440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.440 [2024-11-28 16:28:18.037475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:26.440 [2024-11-28 16:28:18.037484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.440 [2024-11-28 16:28:18.039563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.440 [2024-11-28 16:28:18.039599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.440 spare 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 [2024-11-28 16:28:18.049414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.440 [2024-11-28 16:28:18.051176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.440 [2024-11-28 16:28:18.051322] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:26.440 [2024-11-28 16:28:18.051340] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:26.440 [2024-11-28 16:28:18.051595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:26.440 [2024-11-28 16:28:18.051728] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:26.440 [2024-11-28 
16:28:18.051740] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:26.440 [2024-11-28 16:28:18.051840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.440 "name": "raid_bdev1", 00:15:26.440 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:26.440 "strip_size_kb": 0, 00:15:26.440 "state": "online", 00:15:26.440 "raid_level": "raid1", 00:15:26.440 "superblock": true, 00:15:26.440 "num_base_bdevs": 2, 00:15:26.440 "num_base_bdevs_discovered": 2, 00:15:26.440 "num_base_bdevs_operational": 2, 00:15:26.440 "base_bdevs_list": [ 00:15:26.440 { 00:15:26.440 "name": "BaseBdev1", 00:15:26.440 "uuid": "c9dbfa8d-fd4c-5abb-812f-286dfa31a0ae", 00:15:26.440 "is_configured": true, 00:15:26.440 "data_offset": 256, 00:15:26.440 "data_size": 7936 00:15:26.440 }, 00:15:26.440 { 00:15:26.440 "name": "BaseBdev2", 00:15:26.440 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:26.440 "is_configured": true, 00:15:26.440 "data_offset": 256, 00:15:26.440 "data_size": 7936 00:15:26.440 } 00:15:26.440 ] 00:15:26.440 }' 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.440 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.008 [2024-11-28 16:28:18.476933] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.008 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.008 
16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:27.008 [2024-11-28 16:28:18.748231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:27.008 /dev/nbd0 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.268 1+0 records in 00:15:27.268 1+0 records out 00:15:27.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566255 s, 7.2 MB/s 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:27.268 16:28:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:27.268 16:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:27.837 7936+0 records in 00:15:27.837 7936+0 records out 00:15:27.837 32505856 bytes (33 MB, 31 MiB) copied, 0.598128 s, 54.3 MB/s 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.837 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.097 
16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.097 [2024-11-28 16:28:19.628827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.097 [2024-11-28 16:28:19.640575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.097 16:28:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.097 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.097 "name": "raid_bdev1", 00:15:28.097 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:28.097 "strip_size_kb": 0, 00:15:28.097 "state": "online", 00:15:28.097 "raid_level": "raid1", 00:15:28.097 "superblock": true, 00:15:28.097 "num_base_bdevs": 2, 00:15:28.097 "num_base_bdevs_discovered": 1, 00:15:28.097 "num_base_bdevs_operational": 1, 00:15:28.098 "base_bdevs_list": [ 00:15:28.098 { 00:15:28.098 "name": null, 00:15:28.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.098 "is_configured": false, 00:15:28.098 "data_offset": 0, 00:15:28.098 "data_size": 7936 00:15:28.098 }, 00:15:28.098 { 00:15:28.098 "name": "BaseBdev2", 00:15:28.098 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:28.098 "is_configured": true, 00:15:28.098 "data_offset": 256, 00:15:28.098 
"data_size": 7936 00:15:28.098 } 00:15:28.098 ] 00:15:28.098 }' 00:15:28.098 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.098 16:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.358 16:28:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.358 16:28:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.358 16:28:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.358 [2024-11-28 16:28:20.099997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.358 [2024-11-28 16:28:20.104313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:28.358 16:28:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.358 16:28:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:28.358 [2024-11-28 16:28:20.106253] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.740 "name": "raid_bdev1", 00:15:29.740 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:29.740 "strip_size_kb": 0, 00:15:29.740 "state": "online", 00:15:29.740 "raid_level": "raid1", 00:15:29.740 "superblock": true, 00:15:29.740 "num_base_bdevs": 2, 00:15:29.740 "num_base_bdevs_discovered": 2, 00:15:29.740 "num_base_bdevs_operational": 2, 00:15:29.740 "process": { 00:15:29.740 "type": "rebuild", 00:15:29.740 "target": "spare", 00:15:29.740 "progress": { 00:15:29.740 "blocks": 2560, 00:15:29.740 "percent": 32 00:15:29.740 } 00:15:29.740 }, 00:15:29.740 "base_bdevs_list": [ 00:15:29.740 { 00:15:29.740 "name": "spare", 00:15:29.740 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:29.740 "is_configured": true, 00:15:29.740 "data_offset": 256, 00:15:29.740 "data_size": 7936 00:15:29.740 }, 00:15:29.740 { 00:15:29.740 "name": "BaseBdev2", 00:15:29.740 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:29.740 "is_configured": true, 00:15:29.740 "data_offset": 256, 00:15:29.740 "data_size": 7936 00:15:29.740 } 00:15:29.740 ] 00:15:29.740 }' 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.740 [2024-11-28 16:28:21.243504] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.740 [2024-11-28 16:28:21.310803] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:29.740 [2024-11-28 16:28:21.310872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.740 [2024-11-28 16:28:21.310892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.740 [2024-11-28 16:28:21.310899] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:29.740 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.741 "name": "raid_bdev1", 00:15:29.741 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:29.741 "strip_size_kb": 0, 00:15:29.741 "state": "online", 00:15:29.741 "raid_level": "raid1", 00:15:29.741 "superblock": true, 00:15:29.741 "num_base_bdevs": 2, 00:15:29.741 "num_base_bdevs_discovered": 1, 00:15:29.741 "num_base_bdevs_operational": 1, 00:15:29.741 "base_bdevs_list": [ 00:15:29.741 { 00:15:29.741 "name": null, 00:15:29.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.741 "is_configured": false, 00:15:29.741 "data_offset": 0, 00:15:29.741 "data_size": 7936 00:15:29.741 }, 00:15:29.741 { 00:15:29.741 "name": "BaseBdev2", 00:15:29.741 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:29.741 "is_configured": true, 00:15:29.741 "data_offset": 256, 00:15:29.741 "data_size": 7936 00:15:29.741 } 00:15:29.741 ] 00:15:29.741 }' 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.741 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.001 16:28:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.001 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.261 "name": "raid_bdev1", 00:15:30.261 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:30.261 "strip_size_kb": 0, 00:15:30.261 "state": "online", 00:15:30.261 "raid_level": "raid1", 00:15:30.261 "superblock": true, 00:15:30.261 "num_base_bdevs": 2, 00:15:30.261 "num_base_bdevs_discovered": 1, 00:15:30.261 "num_base_bdevs_operational": 1, 00:15:30.261 "base_bdevs_list": [ 00:15:30.261 { 00:15:30.261 "name": null, 00:15:30.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.261 "is_configured": false, 00:15:30.261 "data_offset": 0, 00:15:30.261 "data_size": 7936 00:15:30.261 }, 00:15:30.261 { 00:15:30.261 "name": "BaseBdev2", 00:15:30.261 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:30.261 "is_configured": true, 00:15:30.261 "data_offset": 
256, 00:15:30.261 "data_size": 7936 00:15:30.261 } 00:15:30.261 ] 00:15:30.261 }' 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.261 [2024-11-28 16:28:21.878348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.261 [2024-11-28 16:28:21.882167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:30.261 [2024-11-28 16:28:21.884014] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.261 16:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.201 "name": "raid_bdev1", 00:15:31.201 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:31.201 "strip_size_kb": 0, 00:15:31.201 "state": "online", 00:15:31.201 "raid_level": "raid1", 00:15:31.201 "superblock": true, 00:15:31.201 "num_base_bdevs": 2, 00:15:31.201 "num_base_bdevs_discovered": 2, 00:15:31.201 "num_base_bdevs_operational": 2, 00:15:31.201 "process": { 00:15:31.201 "type": "rebuild", 00:15:31.201 "target": "spare", 00:15:31.201 "progress": { 00:15:31.201 "blocks": 2560, 00:15:31.201 "percent": 32 00:15:31.201 } 00:15:31.201 }, 00:15:31.201 "base_bdevs_list": [ 00:15:31.201 { 00:15:31.201 "name": "spare", 00:15:31.201 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:31.201 "is_configured": true, 00:15:31.201 "data_offset": 256, 00:15:31.201 "data_size": 7936 00:15:31.201 }, 00:15:31.201 { 00:15:31.201 "name": "BaseBdev2", 00:15:31.201 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:31.201 "is_configured": true, 00:15:31.201 "data_offset": 256, 00:15:31.201 "data_size": 7936 00:15:31.201 } 00:15:31.201 ] 00:15:31.201 }' 00:15:31.201 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.461 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:31.461 16:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.461 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.461 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:31.462 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=562 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.462 16:28:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.462 "name": "raid_bdev1", 00:15:31.462 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:31.462 "strip_size_kb": 0, 00:15:31.462 "state": "online", 00:15:31.462 "raid_level": "raid1", 00:15:31.462 "superblock": true, 00:15:31.462 "num_base_bdevs": 2, 00:15:31.462 "num_base_bdevs_discovered": 2, 00:15:31.462 "num_base_bdevs_operational": 2, 00:15:31.462 "process": { 00:15:31.462 "type": "rebuild", 00:15:31.462 "target": "spare", 00:15:31.462 "progress": { 00:15:31.462 "blocks": 2816, 00:15:31.462 "percent": 35 00:15:31.462 } 00:15:31.462 }, 00:15:31.462 "base_bdevs_list": [ 00:15:31.462 { 00:15:31.462 "name": "spare", 00:15:31.462 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:31.462 "is_configured": true, 00:15:31.462 "data_offset": 256, 00:15:31.462 "data_size": 7936 00:15:31.462 }, 00:15:31.462 { 00:15:31.462 "name": "BaseBdev2", 00:15:31.462 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:31.462 "is_configured": true, 00:15:31.462 "data_offset": 256, 00:15:31.462 "data_size": 7936 00:15:31.462 } 00:15:31.462 ] 00:15:31.462 }' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.462 16:28:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.844 "name": "raid_bdev1", 00:15:32.844 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:32.844 "strip_size_kb": 0, 00:15:32.844 "state": "online", 00:15:32.844 "raid_level": "raid1", 00:15:32.844 "superblock": true, 00:15:32.844 "num_base_bdevs": 2, 00:15:32.844 "num_base_bdevs_discovered": 2, 00:15:32.844 "num_base_bdevs_operational": 2, 00:15:32.844 "process": { 00:15:32.844 "type": "rebuild", 00:15:32.844 "target": "spare", 00:15:32.844 "progress": { 00:15:32.844 "blocks": 5888, 00:15:32.844 "percent": 74 00:15:32.844 } 00:15:32.844 }, 00:15:32.844 "base_bdevs_list": [ 00:15:32.844 { 
00:15:32.844 "name": "spare", 00:15:32.844 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:32.844 "is_configured": true, 00:15:32.844 "data_offset": 256, 00:15:32.844 "data_size": 7936 00:15:32.844 }, 00:15:32.844 { 00:15:32.844 "name": "BaseBdev2", 00:15:32.844 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:32.844 "is_configured": true, 00:15:32.844 "data_offset": 256, 00:15:32.844 "data_size": 7936 00:15:32.844 } 00:15:32.844 ] 00:15:32.844 }' 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.844 16:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.414 [2024-11-28 16:28:24.994258] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:33.414 [2024-11-28 16:28:24.994411] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:33.414 [2024-11-28 16:28:24.994549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.674 "name": "raid_bdev1", 00:15:33.674 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:33.674 "strip_size_kb": 0, 00:15:33.674 "state": "online", 00:15:33.674 "raid_level": "raid1", 00:15:33.674 "superblock": true, 00:15:33.674 "num_base_bdevs": 2, 00:15:33.674 "num_base_bdevs_discovered": 2, 00:15:33.674 "num_base_bdevs_operational": 2, 00:15:33.674 "base_bdevs_list": [ 00:15:33.674 { 00:15:33.674 "name": "spare", 00:15:33.674 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:33.674 "is_configured": true, 00:15:33.674 "data_offset": 256, 00:15:33.674 "data_size": 7936 00:15:33.674 }, 00:15:33.674 { 00:15:33.674 "name": "BaseBdev2", 00:15:33.674 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:33.674 "is_configured": true, 00:15:33.674 "data_offset": 256, 00:15:33.674 "data_size": 7936 00:15:33.674 } 00:15:33.674 ] 00:15:33.674 }' 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.674 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.933 "name": "raid_bdev1", 00:15:33.933 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:33.933 "strip_size_kb": 0, 00:15:33.933 "state": "online", 00:15:33.933 "raid_level": "raid1", 00:15:33.933 "superblock": true, 00:15:33.933 "num_base_bdevs": 2, 00:15:33.933 "num_base_bdevs_discovered": 2, 00:15:33.933 "num_base_bdevs_operational": 2, 00:15:33.933 "base_bdevs_list": [ 00:15:33.933 { 00:15:33.933 "name": "spare", 00:15:33.933 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:33.933 "is_configured": true, 00:15:33.933 
"data_offset": 256, 00:15:33.933 "data_size": 7936 00:15:33.933 }, 00:15:33.933 { 00:15:33.933 "name": "BaseBdev2", 00:15:33.933 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:33.933 "is_configured": true, 00:15:33.933 "data_offset": 256, 00:15:33.933 "data_size": 7936 00:15:33.933 } 00:15:33.933 ] 00:15:33.933 }' 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.933 "name": "raid_bdev1", 00:15:33.933 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:33.933 "strip_size_kb": 0, 00:15:33.933 "state": "online", 00:15:33.933 "raid_level": "raid1", 00:15:33.933 "superblock": true, 00:15:33.933 "num_base_bdevs": 2, 00:15:33.933 "num_base_bdevs_discovered": 2, 00:15:33.933 "num_base_bdevs_operational": 2, 00:15:33.933 "base_bdevs_list": [ 00:15:33.933 { 00:15:33.933 "name": "spare", 00:15:33.933 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:33.933 "is_configured": true, 00:15:33.933 "data_offset": 256, 00:15:33.933 "data_size": 7936 00:15:33.933 }, 00:15:33.933 { 00:15:33.933 "name": "BaseBdev2", 00:15:33.933 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:33.933 "is_configured": true, 00:15:33.933 "data_offset": 256, 00:15:33.933 "data_size": 7936 00:15:33.933 } 00:15:33.933 ] 00:15:33.933 }' 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.933 16:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.502 
[2024-11-28 16:28:26.064579] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.502 [2024-11-28 16:28:26.064651] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.502 [2024-11-28 16:28:26.064775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.502 [2024-11-28 16:28:26.064880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.502 [2024-11-28 16:28:26.064961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.502 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:34.762 /dev/nbd0 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.762 1+0 records in 00:15:34.762 1+0 records out 00:15:34.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420365 s, 9.7 MB/s 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.762 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:35.022 /dev/nbd1 00:15:35.022 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.023 1+0 records in 00:15:35.023 1+0 records out 00:15:35.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030554 s, 13.4 MB/s 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.023 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.283 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:35.284 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.284 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.284 16:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.545 16:28:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.545 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.545 [2024-11-28 16:28:27.136697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:35.545 [2024-11-28 16:28:27.136802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.545 [2024-11-28 16:28:27.136826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:35.546 [2024-11-28 16:28:27.136853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.546 [2024-11-28 16:28:27.138997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.546 
[2024-11-28 16:28:27.139037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:35.546 [2024-11-28 16:28:27.139108] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:35.546 [2024-11-28 16:28:27.139158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.546 [2024-11-28 16:28:27.139266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.546 spare 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.546 [2024-11-28 16:28:27.239158] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:35.546 [2024-11-28 16:28:27.239221] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:35.546 [2024-11-28 16:28:27.239494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:35.546 [2024-11-28 16:28:27.239627] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:35.546 [2024-11-28 16:28:27.239641] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:35.546 [2024-11-28 16:28:27.239753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:35.546 16:28:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.546 "name": "raid_bdev1", 00:15:35.546 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:35.546 "strip_size_kb": 0, 00:15:35.546 "state": "online", 00:15:35.546 "raid_level": "raid1", 00:15:35.546 "superblock": true, 00:15:35.546 "num_base_bdevs": 2, 00:15:35.546 "num_base_bdevs_discovered": 2, 00:15:35.546 "num_base_bdevs_operational": 2, 
00:15:35.546 "base_bdevs_list": [ 00:15:35.546 { 00:15:35.546 "name": "spare", 00:15:35.546 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:35.546 "is_configured": true, 00:15:35.546 "data_offset": 256, 00:15:35.546 "data_size": 7936 00:15:35.546 }, 00:15:35.546 { 00:15:35.546 "name": "BaseBdev2", 00:15:35.546 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:35.546 "is_configured": true, 00:15:35.546 "data_offset": 256, 00:15:35.546 "data_size": 7936 00:15:35.546 } 00:15:35.546 ] 00:15:35.546 }' 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.546 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.118 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.119 "name": "raid_bdev1", 00:15:36.119 
"uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:36.119 "strip_size_kb": 0, 00:15:36.119 "state": "online", 00:15:36.119 "raid_level": "raid1", 00:15:36.119 "superblock": true, 00:15:36.119 "num_base_bdevs": 2, 00:15:36.119 "num_base_bdevs_discovered": 2, 00:15:36.119 "num_base_bdevs_operational": 2, 00:15:36.119 "base_bdevs_list": [ 00:15:36.119 { 00:15:36.119 "name": "spare", 00:15:36.119 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:36.119 "is_configured": true, 00:15:36.119 "data_offset": 256, 00:15:36.119 "data_size": 7936 00:15:36.119 }, 00:15:36.119 { 00:15:36.119 "name": "BaseBdev2", 00:15:36.119 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:36.119 "is_configured": true, 00:15:36.119 "data_offset": 256, 00:15:36.119 "data_size": 7936 00:15:36.119 } 00:15:36.119 ] 00:15:36.119 }' 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.119 [2024-11-28 16:28:27.835608] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.119 16:28:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.119 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.380 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.380 "name": "raid_bdev1", 00:15:36.380 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:36.380 "strip_size_kb": 0, 00:15:36.380 "state": "online", 00:15:36.380 "raid_level": "raid1", 00:15:36.380 "superblock": true, 00:15:36.380 "num_base_bdevs": 2, 00:15:36.380 "num_base_bdevs_discovered": 1, 00:15:36.380 "num_base_bdevs_operational": 1, 00:15:36.380 "base_bdevs_list": [ 00:15:36.380 { 00:15:36.380 "name": null, 00:15:36.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.380 "is_configured": false, 00:15:36.380 "data_offset": 0, 00:15:36.380 "data_size": 7936 00:15:36.380 }, 00:15:36.380 { 00:15:36.380 "name": "BaseBdev2", 00:15:36.380 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:36.380 "is_configured": true, 00:15:36.380 "data_offset": 256, 00:15:36.380 "data_size": 7936 00:15:36.380 } 00:15:36.380 ] 00:15:36.380 }' 00:15:36.380 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.380 16:28:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.644 16:28:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.644 16:28:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.644 16:28:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.644 [2024-11-28 16:28:28.298806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.644 [2024-11-28 16:28:28.299035] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:15:36.644 [2024-11-28 16:28:28.299106] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:36.644 [2024-11-28 16:28:28.299166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.644 [2024-11-28 16:28:28.303156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:36.644 16:28:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.644 16:28:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:36.644 [2024-11-28 16:28:28.305051] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.586 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:37.846 "name": "raid_bdev1", 00:15:37.846 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:37.846 "strip_size_kb": 0, 00:15:37.846 "state": "online", 00:15:37.846 "raid_level": "raid1", 00:15:37.846 "superblock": true, 00:15:37.846 "num_base_bdevs": 2, 00:15:37.846 "num_base_bdevs_discovered": 2, 00:15:37.846 "num_base_bdevs_operational": 2, 00:15:37.846 "process": { 00:15:37.846 "type": "rebuild", 00:15:37.846 "target": "spare", 00:15:37.846 "progress": { 00:15:37.846 "blocks": 2560, 00:15:37.846 "percent": 32 00:15:37.846 } 00:15:37.846 }, 00:15:37.846 "base_bdevs_list": [ 00:15:37.846 { 00:15:37.846 "name": "spare", 00:15:37.846 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:37.846 "is_configured": true, 00:15:37.846 "data_offset": 256, 00:15:37.846 "data_size": 7936 00:15:37.846 }, 00:15:37.846 { 00:15:37.846 "name": "BaseBdev2", 00:15:37.846 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:37.846 "is_configured": true, 00:15:37.846 "data_offset": 256, 00:15:37.846 "data_size": 7936 00:15:37.846 } 00:15:37.846 ] 00:15:37.846 }' 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.846 [2024-11-28 16:28:29.442436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:37.846 [2024-11-28 16:28:29.509060] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.846 [2024-11-28 16:28:29.509173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.846 [2024-11-28 16:28:29.509193] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.846 [2024-11-28 16:28:29.509200] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.846 "name": "raid_bdev1", 00:15:37.846 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:37.846 "strip_size_kb": 0, 00:15:37.846 "state": "online", 00:15:37.846 "raid_level": "raid1", 00:15:37.846 "superblock": true, 00:15:37.846 "num_base_bdevs": 2, 00:15:37.846 "num_base_bdevs_discovered": 1, 00:15:37.846 "num_base_bdevs_operational": 1, 00:15:37.846 "base_bdevs_list": [ 00:15:37.846 { 00:15:37.846 "name": null, 00:15:37.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.846 "is_configured": false, 00:15:37.846 "data_offset": 0, 00:15:37.846 "data_size": 7936 00:15:37.846 }, 00:15:37.846 { 00:15:37.846 "name": "BaseBdev2", 00:15:37.846 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:37.846 "is_configured": true, 00:15:37.846 "data_offset": 256, 00:15:37.846 "data_size": 7936 00:15:37.846 } 00:15:37.846 ] 00:15:37.846 }' 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.846 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.415 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.415 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.415 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.415 [2024-11-28 16:28:29.988257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.415 [2024-11-28 
16:28:29.988372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.415 [2024-11-28 16:28:29.988413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:38.415 [2024-11-28 16:28:29.988441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.415 [2024-11-28 16:28:29.988901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.415 [2024-11-28 16:28:29.988959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.415 [2024-11-28 16:28:29.989066] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:38.415 [2024-11-28 16:28:29.989107] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:38.415 [2024-11-28 16:28:29.989152] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:38.416 [2024-11-28 16:28:29.989221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.416 [2024-11-28 16:28:29.993116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:38.416 spare 00:15:38.416 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.416 16:28:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:38.416 [2024-11-28 16:28:29.995101] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.360 16:28:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.360 16:28:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.360 16:28:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.360 16:28:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.360 16:28:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.360 "name": "raid_bdev1", 00:15:39.360 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:39.360 "strip_size_kb": 0, 00:15:39.360 
"state": "online", 00:15:39.360 "raid_level": "raid1", 00:15:39.360 "superblock": true, 00:15:39.360 "num_base_bdevs": 2, 00:15:39.360 "num_base_bdevs_discovered": 2, 00:15:39.360 "num_base_bdevs_operational": 2, 00:15:39.360 "process": { 00:15:39.360 "type": "rebuild", 00:15:39.360 "target": "spare", 00:15:39.360 "progress": { 00:15:39.360 "blocks": 2560, 00:15:39.360 "percent": 32 00:15:39.360 } 00:15:39.360 }, 00:15:39.360 "base_bdevs_list": [ 00:15:39.360 { 00:15:39.360 "name": "spare", 00:15:39.360 "uuid": "0600cbd6-e733-5f99-9093-65a5d09a9f56", 00:15:39.360 "is_configured": true, 00:15:39.360 "data_offset": 256, 00:15:39.360 "data_size": 7936 00:15:39.360 }, 00:15:39.360 { 00:15:39.360 "name": "BaseBdev2", 00:15:39.360 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:39.360 "is_configured": true, 00:15:39.360 "data_offset": 256, 00:15:39.360 "data_size": 7936 00:15:39.360 } 00:15:39.360 ] 00:15:39.360 }' 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.360 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.620 [2024-11-28 16:28:31.131316] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.620 [2024-11-28 16:28:31.199051] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:39.620 [2024-11-28 16:28:31.199183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.620 [2024-11-28 16:28:31.199220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.620 [2024-11-28 16:28:31.199244] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:39.620 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.621 16:28:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.621 "name": "raid_bdev1", 00:15:39.621 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:39.621 "strip_size_kb": 0, 00:15:39.621 "state": "online", 00:15:39.621 "raid_level": "raid1", 00:15:39.621 "superblock": true, 00:15:39.621 "num_base_bdevs": 2, 00:15:39.621 "num_base_bdevs_discovered": 1, 00:15:39.621 "num_base_bdevs_operational": 1, 00:15:39.621 "base_bdevs_list": [ 00:15:39.621 { 00:15:39.621 "name": null, 00:15:39.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.621 "is_configured": false, 00:15:39.621 "data_offset": 0, 00:15:39.621 "data_size": 7936 00:15:39.621 }, 00:15:39.621 { 00:15:39.621 "name": "BaseBdev2", 00:15:39.621 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:39.621 "is_configured": true, 00:15:39.621 "data_offset": 256, 00:15:39.621 "data_size": 7936 00:15:39.621 } 00:15:39.621 ] 00:15:39.621 }' 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.621 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.881 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.141 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.141 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.141 "name": "raid_bdev1", 00:15:40.141 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:40.141 "strip_size_kb": 0, 00:15:40.141 "state": "online", 00:15:40.141 "raid_level": "raid1", 00:15:40.141 "superblock": true, 00:15:40.141 "num_base_bdevs": 2, 00:15:40.141 "num_base_bdevs_discovered": 1, 00:15:40.141 "num_base_bdevs_operational": 1, 00:15:40.141 "base_bdevs_list": [ 00:15:40.141 { 00:15:40.142 "name": null, 00:15:40.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.142 "is_configured": false, 00:15:40.142 "data_offset": 0, 00:15:40.142 "data_size": 7936 00:15:40.142 }, 00:15:40.142 { 00:15:40.142 "name": "BaseBdev2", 00:15:40.142 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:40.142 "is_configured": true, 00:15:40.142 "data_offset": 256, 00:15:40.142 "data_size": 7936 00:15:40.142 } 00:15:40.142 ] 00:15:40.142 }' 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.142 [2024-11-28 16:28:31.790298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:40.142 [2024-11-28 16:28:31.790406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.142 [2024-11-28 16:28:31.790428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:40.142 [2024-11-28 16:28:31.790438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.142 [2024-11-28 16:28:31.790833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.142 [2024-11-28 16:28:31.790873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.142 [2024-11-28 16:28:31.790939] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:40.142 [2024-11-28 16:28:31.790958] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:40.142 [2024-11-28 16:28:31.790968] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:40.142 [2024-11-28 16:28:31.790982] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:40.142 BaseBdev1 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.142 16:28:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.082 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.364 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.364 "name": "raid_bdev1", 00:15:41.364 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:41.364 "strip_size_kb": 0, 00:15:41.364 "state": "online", 00:15:41.364 "raid_level": "raid1", 00:15:41.364 "superblock": true, 00:15:41.364 "num_base_bdevs": 2, 00:15:41.364 "num_base_bdevs_discovered": 1, 00:15:41.364 "num_base_bdevs_operational": 1, 00:15:41.364 "base_bdevs_list": [ 00:15:41.364 { 00:15:41.364 "name": null, 00:15:41.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.365 "is_configured": false, 00:15:41.365 "data_offset": 0, 00:15:41.365 "data_size": 7936 00:15:41.365 }, 00:15:41.365 { 00:15:41.365 "name": "BaseBdev2", 00:15:41.365 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:41.365 "is_configured": true, 00:15:41.365 "data_offset": 256, 00:15:41.365 "data_size": 7936 00:15:41.365 } 00:15:41.365 ] 00:15:41.365 }' 00:15:41.365 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.365 16:28:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.631 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.632 "name": "raid_bdev1", 00:15:41.632 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:41.632 "strip_size_kb": 0, 00:15:41.632 "state": "online", 00:15:41.632 "raid_level": "raid1", 00:15:41.632 "superblock": true, 00:15:41.632 "num_base_bdevs": 2, 00:15:41.632 "num_base_bdevs_discovered": 1, 00:15:41.632 "num_base_bdevs_operational": 1, 00:15:41.632 "base_bdevs_list": [ 00:15:41.632 { 00:15:41.632 "name": null, 00:15:41.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.632 "is_configured": false, 00:15:41.632 "data_offset": 0, 00:15:41.632 "data_size": 7936 00:15:41.632 }, 00:15:41.632 { 00:15:41.632 "name": "BaseBdev2", 00:15:41.632 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:41.632 "is_configured": true, 00:15:41.632 "data_offset": 256, 00:15:41.632 "data_size": 7936 00:15:41.632 } 00:15:41.632 ] 00:15:41.632 }' 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.632 [2024-11-28 16:28:33.347643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.632 [2024-11-28 16:28:33.347798] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.632 [2024-11-28 16:28:33.347811] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:41.632 request: 00:15:41.632 { 00:15:41.632 "base_bdev": "BaseBdev1", 00:15:41.632 "raid_bdev": "raid_bdev1", 00:15:41.632 "method": "bdev_raid_add_base_bdev", 00:15:41.632 "req_id": 1 00:15:41.632 } 00:15:41.632 Got JSON-RPC error response 00:15:41.632 response: 00:15:41.632 { 00:15:41.632 "code": -22, 00:15:41.632 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:41.632 } 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.632 16:28:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.014 "name": "raid_bdev1", 00:15:43.014 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:43.014 "strip_size_kb": 0, 00:15:43.014 "state": "online", 00:15:43.014 "raid_level": "raid1", 00:15:43.014 "superblock": true, 00:15:43.014 "num_base_bdevs": 2, 00:15:43.014 "num_base_bdevs_discovered": 1, 00:15:43.014 "num_base_bdevs_operational": 1, 00:15:43.014 "base_bdevs_list": [ 00:15:43.014 { 00:15:43.014 "name": null, 00:15:43.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.014 "is_configured": false, 00:15:43.014 "data_offset": 0, 00:15:43.014 "data_size": 7936 00:15:43.014 }, 00:15:43.014 { 00:15:43.014 "name": "BaseBdev2", 00:15:43.014 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:43.014 "is_configured": true, 00:15:43.014 "data_offset": 256, 00:15:43.014 "data_size": 7936 00:15:43.014 } 00:15:43.014 ] 00:15:43.014 }' 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.014 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.274 16:28:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.274 "name": "raid_bdev1", 00:15:43.274 "uuid": "e6683172-53ae-4684-a4aa-cf00dc551796", 00:15:43.274 "strip_size_kb": 0, 00:15:43.274 "state": "online", 00:15:43.274 "raid_level": "raid1", 00:15:43.274 "superblock": true, 00:15:43.274 "num_base_bdevs": 2, 00:15:43.274 "num_base_bdevs_discovered": 1, 00:15:43.274 "num_base_bdevs_operational": 1, 00:15:43.274 "base_bdevs_list": [ 00:15:43.274 { 00:15:43.274 "name": null, 00:15:43.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.274 "is_configured": false, 00:15:43.274 "data_offset": 0, 00:15:43.274 "data_size": 7936 00:15:43.274 }, 00:15:43.274 { 00:15:43.274 "name": "BaseBdev2", 00:15:43.274 "uuid": "92a58f05-2750-5cef-9e74-3e064dbf1065", 00:15:43.274 "is_configured": true, 00:15:43.274 "data_offset": 256, 00:15:43.274 "data_size": 7936 00:15:43.274 } 00:15:43.274 ] 00:15:43.274 }' 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.274 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.275 16:28:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96869 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96869 ']' 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96869 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96869 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.275 killing process with pid 96869 00:15:43.275 Received shutdown signal, test time was about 60.000000 seconds 00:15:43.275 00:15:43.275 Latency(us) 00:15:43.275 [2024-11-28T16:28:35.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.275 [2024-11-28T16:28:35.046Z] =================================================================================================================== 00:15:43.275 [2024-11-28T16:28:35.046Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96869' 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96869 00:15:43.275 [2024-11-28 16:28:34.988808] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.275 [2024-11-28 16:28:34.988946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.275 [2024-11-28 16:28:34.988998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:15:43.275 [2024-11-28 16:28:34.989007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:43.275 16:28:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96869 00:15:43.275 [2024-11-28 16:28:35.020322] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.535 16:28:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:43.535 00:15:43.535 real 0m18.227s 00:15:43.535 user 0m24.133s 00:15:43.535 sys 0m2.591s 00:15:43.535 ************************************ 00:15:43.535 END TEST raid_rebuild_test_sb_4k 00:15:43.535 ************************************ 00:15:43.535 16:28:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.535 16:28:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.795 16:28:35 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:43.795 16:28:35 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:43.795 16:28:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:43.795 16:28:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.795 16:28:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.795 ************************************ 00:15:43.795 START TEST raid_state_function_test_sb_md_separate 00:15:43.795 ************************************ 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:43.795 16:28:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:43.795 16:28:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97545 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97545' 00:15:43.795 Process raid pid: 97545 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97545 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97545 ']' 00:15:43.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.795 16:28:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.795 [2024-11-28 16:28:35.426764] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:43.795 [2024-11-28 16:28:35.426944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.055 [2024-11-28 16:28:35.588748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.055 [2024-11-28 16:28:35.634895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.055 [2024-11-28 16:28:35.678110] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.055 [2024-11-28 16:28:35.678148] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.626 [2024-11-28 16:28:36.263772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.626 [2024-11-28 16:28:36.263841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:44.626 [2024-11-28 16:28:36.263871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.626 [2024-11-28 16:28:36.263881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.626 "name": "Existed_Raid", 00:15:44.626 "uuid": "b87b56e1-83bb-40af-a7a8-2c51a9e5dbe7", 00:15:44.626 "strip_size_kb": 0, 00:15:44.626 "state": "configuring", 00:15:44.626 "raid_level": "raid1", 00:15:44.626 "superblock": true, 00:15:44.626 "num_base_bdevs": 2, 00:15:44.626 "num_base_bdevs_discovered": 0, 00:15:44.626 "num_base_bdevs_operational": 2, 00:15:44.626 "base_bdevs_list": [ 00:15:44.626 { 00:15:44.626 "name": "BaseBdev1", 00:15:44.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.626 "is_configured": false, 00:15:44.626 "data_offset": 0, 00:15:44.626 "data_size": 0 00:15:44.626 }, 00:15:44.626 { 00:15:44.626 "name": "BaseBdev2", 00:15:44.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.626 "is_configured": false, 00:15:44.626 "data_offset": 0, 00:15:44.626 "data_size": 0 00:15:44.626 } 00:15:44.626 ] 00:15:44.626 }' 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.626 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 
[2024-11-28 16:28:36.722928] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.197 [2024-11-28 16:28:36.723017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 [2024-11-28 16:28:36.730952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.197 [2024-11-28 16:28:36.731056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.197 [2024-11-28 16:28:36.731083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.197 [2024-11-28 16:28:36.731105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 [2024-11-28 16:28:36.752276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.197 
BaseBdev1 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 [ 00:15:45.197 { 00:15:45.197 "name": "BaseBdev1", 00:15:45.197 "aliases": [ 00:15:45.197 "a9e8bab3-6d32-4cdb-b3dd-890ffc5279d3" 00:15:45.197 ], 00:15:45.197 "product_name": "Malloc disk", 
00:15:45.197 "block_size": 4096, 00:15:45.197 "num_blocks": 8192, 00:15:45.197 "uuid": "a9e8bab3-6d32-4cdb-b3dd-890ffc5279d3", 00:15:45.197 "md_size": 32, 00:15:45.197 "md_interleave": false, 00:15:45.197 "dif_type": 0, 00:15:45.197 "assigned_rate_limits": { 00:15:45.197 "rw_ios_per_sec": 0, 00:15:45.197 "rw_mbytes_per_sec": 0, 00:15:45.197 "r_mbytes_per_sec": 0, 00:15:45.197 "w_mbytes_per_sec": 0 00:15:45.197 }, 00:15:45.197 "claimed": true, 00:15:45.197 "claim_type": "exclusive_write", 00:15:45.197 "zoned": false, 00:15:45.197 "supported_io_types": { 00:15:45.197 "read": true, 00:15:45.197 "write": true, 00:15:45.197 "unmap": true, 00:15:45.197 "flush": true, 00:15:45.197 "reset": true, 00:15:45.197 "nvme_admin": false, 00:15:45.197 "nvme_io": false, 00:15:45.197 "nvme_io_md": false, 00:15:45.197 "write_zeroes": true, 00:15:45.197 "zcopy": true, 00:15:45.197 "get_zone_info": false, 00:15:45.197 "zone_management": false, 00:15:45.197 "zone_append": false, 00:15:45.197 "compare": false, 00:15:45.197 "compare_and_write": false, 00:15:45.197 "abort": true, 00:15:45.197 "seek_hole": false, 00:15:45.197 "seek_data": false, 00:15:45.197 "copy": true, 00:15:45.197 "nvme_iov_md": false 00:15:45.197 }, 00:15:45.197 "memory_domains": [ 00:15:45.197 { 00:15:45.197 "dma_device_id": "system", 00:15:45.197 "dma_device_type": 1 00:15:45.197 }, 00:15:45.197 { 00:15:45.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.197 "dma_device_type": 2 00:15:45.197 } 00:15:45.197 ], 00:15:45.197 "driver_specific": {} 00:15:45.197 } 00:15:45.197 ] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.197 16:28:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.197 "name": "Existed_Raid", 00:15:45.197 "uuid": "9c1c0932-3373-42da-b739-725e6f1bd96f", 
00:15:45.197 "strip_size_kb": 0, 00:15:45.197 "state": "configuring", 00:15:45.197 "raid_level": "raid1", 00:15:45.197 "superblock": true, 00:15:45.197 "num_base_bdevs": 2, 00:15:45.197 "num_base_bdevs_discovered": 1, 00:15:45.197 "num_base_bdevs_operational": 2, 00:15:45.197 "base_bdevs_list": [ 00:15:45.197 { 00:15:45.197 "name": "BaseBdev1", 00:15:45.197 "uuid": "a9e8bab3-6d32-4cdb-b3dd-890ffc5279d3", 00:15:45.197 "is_configured": true, 00:15:45.197 "data_offset": 256, 00:15:45.197 "data_size": 7936 00:15:45.197 }, 00:15:45.197 { 00:15:45.197 "name": "BaseBdev2", 00:15:45.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.197 "is_configured": false, 00:15:45.197 "data_offset": 0, 00:15:45.197 "data_size": 0 00:15:45.197 } 00:15:45.197 ] 00:15:45.197 }' 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.197 16:28:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.458 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.458 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.458 [2024-11-28 16:28:37.223505] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.458 [2024-11-28 16:28:37.223584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:45.458 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.458 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:45.458 16:28:37 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.458 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.718 [2024-11-28 16:28:37.231549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.718 [2024-11-28 16:28:37.233389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.718 [2024-11-28 16:28:37.233483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.718 "name": "Existed_Raid", 00:15:45.718 "uuid": "0ebf3ee0-35ae-41bd-8bb0-aa19a579a3a6", 00:15:45.718 "strip_size_kb": 0, 00:15:45.718 "state": "configuring", 00:15:45.718 "raid_level": "raid1", 00:15:45.718 "superblock": true, 00:15:45.718 "num_base_bdevs": 2, 00:15:45.718 "num_base_bdevs_discovered": 1, 00:15:45.718 "num_base_bdevs_operational": 2, 00:15:45.718 "base_bdevs_list": [ 00:15:45.718 { 00:15:45.718 "name": "BaseBdev1", 00:15:45.718 "uuid": "a9e8bab3-6d32-4cdb-b3dd-890ffc5279d3", 00:15:45.718 "is_configured": true, 00:15:45.718 "data_offset": 256, 00:15:45.718 "data_size": 7936 00:15:45.718 }, 00:15:45.718 { 00:15:45.718 "name": "BaseBdev2", 00:15:45.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.718 "is_configured": false, 00:15:45.718 "data_offset": 0, 00:15:45.718 "data_size": 0 00:15:45.718 } 00:15:45.718 ] 00:15:45.718 }' 00:15:45.718 16:28:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.718 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.979 [2024-11-28 16:28:37.722748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.979 [2024-11-28 16:28:37.722975] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:45.979 [2024-11-28 16:28:37.723001] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:45.979 [2024-11-28 16:28:37.723124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:45.979 [2024-11-28 16:28:37.723238] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:45.979 [2024-11-28 16:28:37.723256] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:45.979 [2024-11-28 16:28:37.723336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.979 BaseBdev2 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.979 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.239 [ 00:15:46.239 { 00:15:46.239 "name": "BaseBdev2", 00:15:46.239 "aliases": [ 00:15:46.239 "3e9cb92d-3196-4e30-a5cf-40a9db1f2467" 00:15:46.239 ], 00:15:46.239 "product_name": "Malloc disk", 00:15:46.239 "block_size": 4096, 00:15:46.239 "num_blocks": 8192, 00:15:46.239 "uuid": "3e9cb92d-3196-4e30-a5cf-40a9db1f2467", 00:15:46.239 "md_size": 32, 00:15:46.239 "md_interleave": false, 00:15:46.239 "dif_type": 0, 00:15:46.239 "assigned_rate_limits": { 00:15:46.239 "rw_ios_per_sec": 0, 00:15:46.239 "rw_mbytes_per_sec": 0, 00:15:46.239 "r_mbytes_per_sec": 0, 00:15:46.239 "w_mbytes_per_sec": 0 00:15:46.239 }, 00:15:46.239 "claimed": true, 00:15:46.239 "claim_type": 
"exclusive_write", 00:15:46.239 "zoned": false, 00:15:46.239 "supported_io_types": { 00:15:46.239 "read": true, 00:15:46.239 "write": true, 00:15:46.239 "unmap": true, 00:15:46.239 "flush": true, 00:15:46.239 "reset": true, 00:15:46.239 "nvme_admin": false, 00:15:46.239 "nvme_io": false, 00:15:46.239 "nvme_io_md": false, 00:15:46.239 "write_zeroes": true, 00:15:46.239 "zcopy": true, 00:15:46.239 "get_zone_info": false, 00:15:46.239 "zone_management": false, 00:15:46.239 "zone_append": false, 00:15:46.239 "compare": false, 00:15:46.239 "compare_and_write": false, 00:15:46.239 "abort": true, 00:15:46.239 "seek_hole": false, 00:15:46.239 "seek_data": false, 00:15:46.239 "copy": true, 00:15:46.239 "nvme_iov_md": false 00:15:46.239 }, 00:15:46.239 "memory_domains": [ 00:15:46.239 { 00:15:46.239 "dma_device_id": "system", 00:15:46.239 "dma_device_type": 1 00:15:46.239 }, 00:15:46.239 { 00:15:46.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.239 "dma_device_type": 2 00:15:46.239 } 00:15:46.239 ], 00:15:46.239 "driver_specific": {} 00:15:46.239 } 00:15:46.239 ] 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.239 
16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.239 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.239 "name": "Existed_Raid", 00:15:46.239 "uuid": "0ebf3ee0-35ae-41bd-8bb0-aa19a579a3a6", 00:15:46.239 "strip_size_kb": 0, 00:15:46.239 "state": "online", 00:15:46.239 "raid_level": "raid1", 00:15:46.239 "superblock": true, 00:15:46.239 "num_base_bdevs": 2, 00:15:46.239 "num_base_bdevs_discovered": 2, 00:15:46.239 "num_base_bdevs_operational": 2, 00:15:46.239 
"base_bdevs_list": [ 00:15:46.239 { 00:15:46.239 "name": "BaseBdev1", 00:15:46.239 "uuid": "a9e8bab3-6d32-4cdb-b3dd-890ffc5279d3", 00:15:46.239 "is_configured": true, 00:15:46.239 "data_offset": 256, 00:15:46.239 "data_size": 7936 00:15:46.239 }, 00:15:46.239 { 00:15:46.239 "name": "BaseBdev2", 00:15:46.239 "uuid": "3e9cb92d-3196-4e30-a5cf-40a9db1f2467", 00:15:46.239 "is_configured": true, 00:15:46.239 "data_offset": 256, 00:15:46.239 "data_size": 7936 00:15:46.239 } 00:15:46.239 ] 00:15:46.239 }' 00:15:46.240 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.240 16:28:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.499 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:15:46.500 [2024-11-28 16:28:38.126341] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.500 "name": "Existed_Raid", 00:15:46.500 "aliases": [ 00:15:46.500 "0ebf3ee0-35ae-41bd-8bb0-aa19a579a3a6" 00:15:46.500 ], 00:15:46.500 "product_name": "Raid Volume", 00:15:46.500 "block_size": 4096, 00:15:46.500 "num_blocks": 7936, 00:15:46.500 "uuid": "0ebf3ee0-35ae-41bd-8bb0-aa19a579a3a6", 00:15:46.500 "md_size": 32, 00:15:46.500 "md_interleave": false, 00:15:46.500 "dif_type": 0, 00:15:46.500 "assigned_rate_limits": { 00:15:46.500 "rw_ios_per_sec": 0, 00:15:46.500 "rw_mbytes_per_sec": 0, 00:15:46.500 "r_mbytes_per_sec": 0, 00:15:46.500 "w_mbytes_per_sec": 0 00:15:46.500 }, 00:15:46.500 "claimed": false, 00:15:46.500 "zoned": false, 00:15:46.500 "supported_io_types": { 00:15:46.500 "read": true, 00:15:46.500 "write": true, 00:15:46.500 "unmap": false, 00:15:46.500 "flush": false, 00:15:46.500 "reset": true, 00:15:46.500 "nvme_admin": false, 00:15:46.500 "nvme_io": false, 00:15:46.500 "nvme_io_md": false, 00:15:46.500 "write_zeroes": true, 00:15:46.500 "zcopy": false, 00:15:46.500 "get_zone_info": false, 00:15:46.500 "zone_management": false, 00:15:46.500 "zone_append": false, 00:15:46.500 "compare": false, 00:15:46.500 "compare_and_write": false, 00:15:46.500 "abort": false, 00:15:46.500 "seek_hole": false, 00:15:46.500 "seek_data": false, 00:15:46.500 "copy": false, 00:15:46.500 "nvme_iov_md": false 00:15:46.500 }, 00:15:46.500 "memory_domains": [ 00:15:46.500 { 00:15:46.500 "dma_device_id": "system", 00:15:46.500 "dma_device_type": 1 00:15:46.500 }, 00:15:46.500 { 00:15:46.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.500 "dma_device_type": 2 00:15:46.500 }, 00:15:46.500 { 
00:15:46.500 "dma_device_id": "system", 00:15:46.500 "dma_device_type": 1 00:15:46.500 }, 00:15:46.500 { 00:15:46.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.500 "dma_device_type": 2 00:15:46.500 } 00:15:46.500 ], 00:15:46.500 "driver_specific": { 00:15:46.500 "raid": { 00:15:46.500 "uuid": "0ebf3ee0-35ae-41bd-8bb0-aa19a579a3a6", 00:15:46.500 "strip_size_kb": 0, 00:15:46.500 "state": "online", 00:15:46.500 "raid_level": "raid1", 00:15:46.500 "superblock": true, 00:15:46.500 "num_base_bdevs": 2, 00:15:46.500 "num_base_bdevs_discovered": 2, 00:15:46.500 "num_base_bdevs_operational": 2, 00:15:46.500 "base_bdevs_list": [ 00:15:46.500 { 00:15:46.500 "name": "BaseBdev1", 00:15:46.500 "uuid": "a9e8bab3-6d32-4cdb-b3dd-890ffc5279d3", 00:15:46.500 "is_configured": true, 00:15:46.500 "data_offset": 256, 00:15:46.500 "data_size": 7936 00:15:46.500 }, 00:15:46.500 { 00:15:46.500 "name": "BaseBdev2", 00:15:46.500 "uuid": "3e9cb92d-3196-4e30-a5cf-40a9db1f2467", 00:15:46.500 "is_configured": true, 00:15:46.500 "data_offset": 256, 00:15:46.500 "data_size": 7936 00:15:46.500 } 00:15:46.500 ] 00:15:46.500 } 00:15:46.500 } 00:15:46.500 }' 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:46.500 BaseBdev2' 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.500 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.763 [2024-11-28 16:28:38.353768] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.763 "name": "Existed_Raid", 00:15:46.763 "uuid": "0ebf3ee0-35ae-41bd-8bb0-aa19a579a3a6", 00:15:46.763 "strip_size_kb": 0, 00:15:46.763 "state": "online", 00:15:46.763 "raid_level": "raid1", 00:15:46.763 "superblock": true, 00:15:46.763 "num_base_bdevs": 2, 00:15:46.763 "num_base_bdevs_discovered": 1, 00:15:46.763 "num_base_bdevs_operational": 1, 00:15:46.763 "base_bdevs_list": [ 00:15:46.763 { 00:15:46.763 "name": null, 00:15:46.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.763 "is_configured": false, 00:15:46.763 "data_offset": 0, 00:15:46.763 "data_size": 7936 00:15:46.763 }, 00:15:46.763 { 00:15:46.763 "name": "BaseBdev2", 00:15:46.763 "uuid": 
"3e9cb92d-3196-4e30-a5cf-40a9db1f2467", 00:15:46.763 "is_configured": true, 00:15:46.763 "data_offset": 256, 00:15:46.763 "data_size": 7936 00:15:46.763 } 00:15:46.763 ] 00:15:46.763 }' 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.763 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.428 [2024-11-28 16:28:38.865094] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.428 [2024-11-28 16:28:38.865243] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.428 [2024-11-28 16:28:38.877668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.428 [2024-11-28 16:28:38.877722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.428 [2024-11-28 16:28:38.877735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:47.428 16:28:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97545 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97545 ']' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97545 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97545 00:15:47.428 killing process with pid 97545 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97545' 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97545 00:15:47.428 [2024-11-28 16:28:38.964429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.428 16:28:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97545 00:15:47.428 [2024-11-28 16:28:38.965387] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.689 16:28:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:47.689 00:15:47.689 real 0m3.887s 00:15:47.689 user 0m6.045s 00:15:47.689 sys 0m0.846s 00:15:47.689 ************************************ 00:15:47.689 END TEST raid_state_function_test_sb_md_separate 00:15:47.689 
************************************ 00:15:47.689 16:28:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.689 16:28:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.689 16:28:39 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:47.689 16:28:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:47.689 16:28:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.689 16:28:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.689 ************************************ 00:15:47.689 START TEST raid_superblock_test_md_separate 00:15:47.689 ************************************ 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97786 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97786 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97786 ']' 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.689 16:28:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.689 [2024-11-28 16:28:39.399606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:47.689 [2024-11-28 16:28:39.399815] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97786 ] 00:15:47.949 [2024-11-28 16:28:39.563957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.949 [2024-11-28 16:28:39.610534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.949 [2024-11-28 16:28:39.653165] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.949 [2024-11-28 16:28:39.653215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:48.520 16:28:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 malloc1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 [2024-11-28 16:28:40.228031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.520 [2024-11-28 16:28:40.228090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.520 [2024-11-28 16:28:40.228133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.520 [2024-11-28 16:28:40.228144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.520 [2024-11-28 16:28:40.230101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.520 [2024-11-28 16:28:40.230142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:48.520 pt1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 malloc2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.520 16:28:40 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 [2024-11-28 16:28:40.266286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.520 [2024-11-28 16:28:40.266390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.520 [2024-11-28 16:28:40.266423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.520 [2024-11-28 16:28:40.266452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.520 [2024-11-28 16:28:40.268306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.520 [2024-11-28 16:28:40.268383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.520 pt2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.520 [2024-11-28 16:28:40.278288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.520 [2024-11-28 16:28:40.280118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.520 [2024-11-28 16:28:40.280314] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:48.520 [2024-11-28 16:28:40.280365] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:48.520 [2024-11-28 16:28:40.280468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:48.520 [2024-11-28 16:28:40.280596] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:48.520 [2024-11-28 16:28:40.280639] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:48.520 [2024-11-28 16:28:40.280773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.520 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.780 16:28:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.780 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.780 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.781 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.781 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.781 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.781 "name": "raid_bdev1", 00:15:48.781 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:48.781 "strip_size_kb": 0, 00:15:48.781 "state": "online", 00:15:48.781 "raid_level": "raid1", 00:15:48.781 "superblock": true, 00:15:48.781 "num_base_bdevs": 2, 00:15:48.781 "num_base_bdevs_discovered": 2, 00:15:48.781 "num_base_bdevs_operational": 2, 00:15:48.781 "base_bdevs_list": [ 00:15:48.781 { 00:15:48.781 "name": "pt1", 00:15:48.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.781 "is_configured": true, 00:15:48.781 "data_offset": 256, 00:15:48.781 "data_size": 7936 00:15:48.781 }, 00:15:48.781 { 00:15:48.781 "name": "pt2", 00:15:48.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.781 "is_configured": true, 00:15:48.781 "data_offset": 256, 00:15:48.781 "data_size": 7936 00:15:48.781 } 00:15:48.781 ] 00:15:48.781 }' 00:15:48.781 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.781 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.041 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.041 16:28:40 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.041 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.042 [2024-11-28 16:28:40.717784] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.042 "name": "raid_bdev1", 00:15:49.042 "aliases": [ 00:15:49.042 "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e" 00:15:49.042 ], 00:15:49.042 "product_name": "Raid Volume", 00:15:49.042 "block_size": 4096, 00:15:49.042 "num_blocks": 7936, 00:15:49.042 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:49.042 "md_size": 32, 00:15:49.042 "md_interleave": false, 00:15:49.042 "dif_type": 0, 00:15:49.042 "assigned_rate_limits": { 00:15:49.042 "rw_ios_per_sec": 0, 00:15:49.042 "rw_mbytes_per_sec": 0, 00:15:49.042 "r_mbytes_per_sec": 0, 00:15:49.042 "w_mbytes_per_sec": 0 00:15:49.042 }, 00:15:49.042 "claimed": false, 00:15:49.042 "zoned": false, 
00:15:49.042 "supported_io_types": { 00:15:49.042 "read": true, 00:15:49.042 "write": true, 00:15:49.042 "unmap": false, 00:15:49.042 "flush": false, 00:15:49.042 "reset": true, 00:15:49.042 "nvme_admin": false, 00:15:49.042 "nvme_io": false, 00:15:49.042 "nvme_io_md": false, 00:15:49.042 "write_zeroes": true, 00:15:49.042 "zcopy": false, 00:15:49.042 "get_zone_info": false, 00:15:49.042 "zone_management": false, 00:15:49.042 "zone_append": false, 00:15:49.042 "compare": false, 00:15:49.042 "compare_and_write": false, 00:15:49.042 "abort": false, 00:15:49.042 "seek_hole": false, 00:15:49.042 "seek_data": false, 00:15:49.042 "copy": false, 00:15:49.042 "nvme_iov_md": false 00:15:49.042 }, 00:15:49.042 "memory_domains": [ 00:15:49.042 { 00:15:49.042 "dma_device_id": "system", 00:15:49.042 "dma_device_type": 1 00:15:49.042 }, 00:15:49.042 { 00:15:49.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.042 "dma_device_type": 2 00:15:49.042 }, 00:15:49.042 { 00:15:49.042 "dma_device_id": "system", 00:15:49.042 "dma_device_type": 1 00:15:49.042 }, 00:15:49.042 { 00:15:49.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.042 "dma_device_type": 2 00:15:49.042 } 00:15:49.042 ], 00:15:49.042 "driver_specific": { 00:15:49.042 "raid": { 00:15:49.042 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:49.042 "strip_size_kb": 0, 00:15:49.042 "state": "online", 00:15:49.042 "raid_level": "raid1", 00:15:49.042 "superblock": true, 00:15:49.042 "num_base_bdevs": 2, 00:15:49.042 "num_base_bdevs_discovered": 2, 00:15:49.042 "num_base_bdevs_operational": 2, 00:15:49.042 "base_bdevs_list": [ 00:15:49.042 { 00:15:49.042 "name": "pt1", 00:15:49.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.042 "is_configured": true, 00:15:49.042 "data_offset": 256, 00:15:49.042 "data_size": 7936 00:15:49.042 }, 00:15:49.042 { 00:15:49.042 "name": "pt2", 00:15:49.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.042 "is_configured": true, 00:15:49.042 "data_offset": 256, 
00:15:49.042 "data_size": 7936 00:15:49.042 } 00:15:49.042 ] 00:15:49.042 } 00:15:49.042 } 00:15:49.042 }' 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.042 pt2' 00:15:49.042 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 [2024-11-28 16:28:40.933326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0a24de5e-1d43-4ac4-b6c6-02cec9fab53e 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0a24de5e-1d43-4ac4-b6c6-02cec9fab53e ']' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 [2024-11-28 16:28:40.977042] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.303 [2024-11-28 16:28:40.977109] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.303 [2024-11-28 16:28:40.977197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.303 [2024-11-28 16:28:40.977271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.303 [2024-11-28 16:28:40.977314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 16:28:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:49.303 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.563 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:49.563 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:49.563 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:49.564 16:28:41 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.564 [2024-11-28 16:28:41.112824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:49.564 [2024-11-28 16:28:41.114655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:49.564 [2024-11-28 16:28:41.114775] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:49.564 [2024-11-28 16:28:41.114837] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:49.564 [2024-11-28 16:28:41.114857] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.564 [2024-11-28 16:28:41.114866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:49.564 request: 00:15:49.564 { 00:15:49.564 "name": 
"raid_bdev1", 00:15:49.564 "raid_level": "raid1", 00:15:49.564 "base_bdevs": [ 00:15:49.564 "malloc1", 00:15:49.564 "malloc2" 00:15:49.564 ], 00:15:49.564 "superblock": false, 00:15:49.564 "method": "bdev_raid_create", 00:15:49.564 "req_id": 1 00:15:49.564 } 00:15:49.564 Got JSON-RPC error response 00:15:49.564 response: 00:15:49.564 { 00:15:49.564 "code": -17, 00:15:49.564 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:49.564 } 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.564 [2024-11-28 16:28:41.180679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.564 [2024-11-28 16:28:41.180766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.564 [2024-11-28 16:28:41.180800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.564 [2024-11-28 16:28:41.180826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.564 [2024-11-28 16:28:41.182733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.564 [2024-11-28 16:28:41.182801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.564 [2024-11-28 16:28:41.182875] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:49.564 [2024-11-28 16:28:41.182928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.564 pt1 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.564 "name": "raid_bdev1", 00:15:49.564 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:49.564 "strip_size_kb": 0, 00:15:49.564 "state": "configuring", 00:15:49.564 "raid_level": "raid1", 00:15:49.564 "superblock": true, 00:15:49.564 "num_base_bdevs": 2, 00:15:49.564 "num_base_bdevs_discovered": 1, 00:15:49.564 "num_base_bdevs_operational": 2, 00:15:49.564 "base_bdevs_list": [ 00:15:49.564 { 00:15:49.564 "name": "pt1", 00:15:49.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.564 "is_configured": true, 00:15:49.564 "data_offset": 256, 00:15:49.564 "data_size": 7936 00:15:49.564 }, 00:15:49.564 { 00:15:49.564 "name": null, 00:15:49.564 
"uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.564 "is_configured": false, 00:15:49.564 "data_offset": 256, 00:15:49.564 "data_size": 7936 00:15:49.564 } 00:15:49.564 ] 00:15:49.564 }' 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.564 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.134 [2024-11-28 16:28:41.632002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.134 [2024-11-28 16:28:41.632053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.134 [2024-11-28 16:28:41.632089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:50.134 [2024-11-28 16:28:41.632098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.134 [2024-11-28 16:28:41.632240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.134 [2024-11-28 16:28:41.632252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.134 [2024-11-28 16:28:41.632290] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:15:50.134 [2024-11-28 16:28:41.632305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.134 [2024-11-28 16:28:41.632381] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:50.134 [2024-11-28 16:28:41.632389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:50.134 [2024-11-28 16:28:41.632453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:50.134 [2024-11-28 16:28:41.632528] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:50.134 [2024-11-28 16:28:41.632539] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:50.134 [2024-11-28 16:28:41.632597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.134 pt2 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.134 "name": "raid_bdev1", 00:15:50.134 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:50.134 "strip_size_kb": 0, 00:15:50.134 "state": "online", 00:15:50.134 "raid_level": "raid1", 00:15:50.134 "superblock": true, 00:15:50.134 "num_base_bdevs": 2, 00:15:50.134 "num_base_bdevs_discovered": 2, 00:15:50.134 "num_base_bdevs_operational": 2, 00:15:50.134 "base_bdevs_list": [ 00:15:50.134 { 00:15:50.134 "name": "pt1", 00:15:50.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.134 "is_configured": true, 00:15:50.134 "data_offset": 256, 00:15:50.134 "data_size": 7936 00:15:50.134 }, 00:15:50.134 { 00:15:50.134 "name": "pt2", 00:15:50.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.134 "is_configured": true, 00:15:50.134 "data_offset": 256, 
00:15:50.134 "data_size": 7936 00:15:50.134 } 00:15:50.134 ] 00:15:50.134 }' 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.134 16:28:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.395 [2024-11-28 16:28:42.087441] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.395 "name": "raid_bdev1", 00:15:50.395 "aliases": [ 00:15:50.395 "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e" 00:15:50.395 ], 00:15:50.395 "product_name": 
"Raid Volume", 00:15:50.395 "block_size": 4096, 00:15:50.395 "num_blocks": 7936, 00:15:50.395 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:50.395 "md_size": 32, 00:15:50.395 "md_interleave": false, 00:15:50.395 "dif_type": 0, 00:15:50.395 "assigned_rate_limits": { 00:15:50.395 "rw_ios_per_sec": 0, 00:15:50.395 "rw_mbytes_per_sec": 0, 00:15:50.395 "r_mbytes_per_sec": 0, 00:15:50.395 "w_mbytes_per_sec": 0 00:15:50.395 }, 00:15:50.395 "claimed": false, 00:15:50.395 "zoned": false, 00:15:50.395 "supported_io_types": { 00:15:50.395 "read": true, 00:15:50.395 "write": true, 00:15:50.395 "unmap": false, 00:15:50.395 "flush": false, 00:15:50.395 "reset": true, 00:15:50.395 "nvme_admin": false, 00:15:50.395 "nvme_io": false, 00:15:50.395 "nvme_io_md": false, 00:15:50.395 "write_zeroes": true, 00:15:50.395 "zcopy": false, 00:15:50.395 "get_zone_info": false, 00:15:50.395 "zone_management": false, 00:15:50.395 "zone_append": false, 00:15:50.395 "compare": false, 00:15:50.395 "compare_and_write": false, 00:15:50.395 "abort": false, 00:15:50.395 "seek_hole": false, 00:15:50.395 "seek_data": false, 00:15:50.395 "copy": false, 00:15:50.395 "nvme_iov_md": false 00:15:50.395 }, 00:15:50.395 "memory_domains": [ 00:15:50.395 { 00:15:50.395 "dma_device_id": "system", 00:15:50.395 "dma_device_type": 1 00:15:50.395 }, 00:15:50.395 { 00:15:50.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.395 "dma_device_type": 2 00:15:50.395 }, 00:15:50.395 { 00:15:50.395 "dma_device_id": "system", 00:15:50.395 "dma_device_type": 1 00:15:50.395 }, 00:15:50.395 { 00:15:50.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.395 "dma_device_type": 2 00:15:50.395 } 00:15:50.395 ], 00:15:50.395 "driver_specific": { 00:15:50.395 "raid": { 00:15:50.395 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:50.395 "strip_size_kb": 0, 00:15:50.395 "state": "online", 00:15:50.395 "raid_level": "raid1", 00:15:50.395 "superblock": true, 00:15:50.395 "num_base_bdevs": 2, 00:15:50.395 
"num_base_bdevs_discovered": 2, 00:15:50.395 "num_base_bdevs_operational": 2, 00:15:50.395 "base_bdevs_list": [ 00:15:50.395 { 00:15:50.395 "name": "pt1", 00:15:50.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.395 "is_configured": true, 00:15:50.395 "data_offset": 256, 00:15:50.395 "data_size": 7936 00:15:50.395 }, 00:15:50.395 { 00:15:50.395 "name": "pt2", 00:15:50.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.395 "is_configured": true, 00:15:50.395 "data_offset": 256, 00:15:50.395 "data_size": 7936 00:15:50.395 } 00:15:50.395 ] 00:15:50.395 } 00:15:50.395 } 00:15:50.395 }' 00:15:50.395 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.656 pt2' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.656 
16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:50.656 [2024-11-28 16:28:42.331027] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0a24de5e-1d43-4ac4-b6c6-02cec9fab53e '!=' 0a24de5e-1d43-4ac4-b6c6-02cec9fab53e ']' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.656 [2024-11-28 16:28:42.382711] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.656 16:28:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.656 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.917 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.917 "name": "raid_bdev1", 00:15:50.917 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:50.917 "strip_size_kb": 0, 00:15:50.917 "state": "online", 00:15:50.917 "raid_level": "raid1", 00:15:50.917 "superblock": true, 00:15:50.917 "num_base_bdevs": 2, 00:15:50.917 "num_base_bdevs_discovered": 1, 00:15:50.917 "num_base_bdevs_operational": 1, 00:15:50.917 "base_bdevs_list": [ 00:15:50.917 { 00:15:50.917 "name": null, 00:15:50.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.917 "is_configured": false, 00:15:50.917 "data_offset": 0, 00:15:50.917 "data_size": 7936 00:15:50.917 }, 00:15:50.917 { 00:15:50.917 "name": "pt2", 00:15:50.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.917 "is_configured": true, 00:15:50.917 "data_offset": 256, 00:15:50.917 "data_size": 7936 00:15:50.917 } 00:15:50.917 ] 00:15:50.917 }' 00:15:50.917 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:50.917 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 [2024-11-28 16:28:42.825935] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.178 [2024-11-28 16:28:42.826004] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.178 [2024-11-28 16:28:42.826078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.178 [2024-11-28 16:28:42.826117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.178 [2024-11-28 16:28:42.826126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:51.178 16:28:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 [2024-11-28 16:28:42.893851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.178 [2024-11-28 16:28:42.893939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.178 
[2024-11-28 16:28:42.893979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:51.178 [2024-11-28 16:28:42.894009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.178 [2024-11-28 16:28:42.895925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.178 [2024-11-28 16:28:42.896003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.178 [2024-11-28 16:28:42.896077] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.178 [2024-11-28 16:28:42.896138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.178 [2024-11-28 16:28:42.896226] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:51.178 [2024-11-28 16:28:42.896262] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:51.178 [2024-11-28 16:28:42.896354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:51.178 [2024-11-28 16:28:42.896467] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:51.178 [2024-11-28 16:28:42.896504] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:51.178 [2024-11-28 16:28:42.896598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.178 pt2 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.178 "name": "raid_bdev1", 00:15:51.178 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:51.178 "strip_size_kb": 0, 00:15:51.178 "state": "online", 00:15:51.178 "raid_level": "raid1", 00:15:51.178 "superblock": true, 00:15:51.178 "num_base_bdevs": 2, 00:15:51.178 "num_base_bdevs_discovered": 1, 00:15:51.178 "num_base_bdevs_operational": 1, 00:15:51.178 "base_bdevs_list": [ 00:15:51.178 { 00:15:51.178 
"name": null, 00:15:51.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.178 "is_configured": false, 00:15:51.178 "data_offset": 256, 00:15:51.178 "data_size": 7936 00:15:51.178 }, 00:15:51.178 { 00:15:51.178 "name": "pt2", 00:15:51.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.178 "is_configured": true, 00:15:51.178 "data_offset": 256, 00:15:51.178 "data_size": 7936 00:15:51.178 } 00:15:51.178 ] 00:15:51.178 }' 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.178 16:28:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.750 [2024-11-28 16:28:43.341072] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.750 [2024-11-28 16:28:43.341137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.750 [2024-11-28 16:28:43.341205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.750 [2024-11-28 16:28:43.341257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.750 [2024-11-28 16:28:43.341290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.750 16:28:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.750 [2024-11-28 16:28:43.404944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.750 [2024-11-28 16:28:43.404994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.750 [2024-11-28 16:28:43.405028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:51.750 [2024-11-28 16:28:43.405040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.750 [2024-11-28 16:28:43.406845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.750 [2024-11-28 16:28:43.406892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.750 [2024-11-28 16:28:43.406936] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:15:51.750 [2024-11-28 16:28:43.406972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.750 [2024-11-28 16:28:43.407065] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.750 [2024-11-28 16:28:43.407077] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.750 [2024-11-28 16:28:43.407091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:51.750 [2024-11-28 16:28:43.407124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.750 [2024-11-28 16:28:43.407177] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:51.750 [2024-11-28 16:28:43.407189] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:51.750 [2024-11-28 16:28:43.407252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:51.750 [2024-11-28 16:28:43.407324] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:51.750 [2024-11-28 16:28:43.407331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:51.750 [2024-11-28 16:28:43.407400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.750 pt1 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.750 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.750 "name": "raid_bdev1", 00:15:51.750 "uuid": "0a24de5e-1d43-4ac4-b6c6-02cec9fab53e", 00:15:51.751 "strip_size_kb": 0, 00:15:51.751 "state": "online", 00:15:51.751 "raid_level": "raid1", 00:15:51.751 "superblock": true, 00:15:51.751 "num_base_bdevs": 2, 00:15:51.751 "num_base_bdevs_discovered": 1, 00:15:51.751 
"num_base_bdevs_operational": 1, 00:15:51.751 "base_bdevs_list": [ 00:15:51.751 { 00:15:51.751 "name": null, 00:15:51.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.751 "is_configured": false, 00:15:51.751 "data_offset": 256, 00:15:51.751 "data_size": 7936 00:15:51.751 }, 00:15:51.751 { 00:15:51.751 "name": "pt2", 00:15:51.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.751 "is_configured": true, 00:15:51.751 "data_offset": 256, 00:15:51.751 "data_size": 7936 00:15:51.751 } 00:15:51.751 ] 00:15:51.751 }' 00:15:51.751 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.751 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.011 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:52.011 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.011 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.011 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.271 [2024-11-28 
16:28:43.836589] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 0a24de5e-1d43-4ac4-b6c6-02cec9fab53e '!=' 0a24de5e-1d43-4ac4-b6c6-02cec9fab53e ']' 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97786 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97786 ']' 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97786 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97786 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97786' 00:15:52.271 killing process with pid 97786 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97786 00:15:52.271 [2024-11-28 16:28:43.914120] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.271 [2024-11-28 16:28:43.914229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.271 16:28:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97786 
00:15:52.271 [2024-11-28 16:28:43.914302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.271 [2024-11-28 16:28:43.914364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:52.271 [2024-11-28 16:28:43.938380] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.532 16:28:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:52.532 00:15:52.532 real 0m4.891s 00:15:52.532 user 0m7.920s 00:15:52.532 sys 0m1.086s 00:15:52.532 16:28:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.532 ************************************ 00:15:52.532 END TEST raid_superblock_test_md_separate 00:15:52.532 ************************************ 00:15:52.532 16:28:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.532 16:28:44 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:52.532 16:28:44 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:52.532 16:28:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:52.532 16:28:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.532 16:28:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.532 ************************************ 00:15:52.532 START TEST raid_rebuild_test_sb_md_separate 00:15:52.532 ************************************ 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:52.532 
16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98098 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98098 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98098 ']' 00:15:52.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.532 16:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.792 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:52.792 Zero copy mechanism will not be used. 00:15:52.792 [2024-11-28 16:28:44.380587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:52.792 [2024-11-28 16:28:44.380750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98098 ] 00:15:52.792 [2024-11-28 16:28:44.541994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.052 [2024-11-28 16:28:44.587239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.052 [2024-11-28 16:28:44.629855] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.052 [2024-11-28 16:28:44.629890] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 BaseBdev1_malloc 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.622 16:28:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 [2024-11-28 16:28:45.204582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.622 [2024-11-28 16:28:45.204643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.622 [2024-11-28 16:28:45.204679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:53.622 [2024-11-28 16:28:45.204690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.622 [2024-11-28 16:28:45.206611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.622 [2024-11-28 16:28:45.206703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.622 BaseBdev1 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 BaseBdev2_malloc 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 [2024-11-28 16:28:45.250649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:53.622 [2024-11-28 16:28:45.250872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.622 [2024-11-28 16:28:45.250930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.622 [2024-11-28 16:28:45.250951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.622 [2024-11-28 16:28:45.255215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.622 [2024-11-28 16:28:45.255289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:53.622 BaseBdev2 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 spare_malloc 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 spare_delay 00:15:53.622 16:28:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 [2024-11-28 16:28:45.294123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:53.622 [2024-11-28 16:28:45.294196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.622 [2024-11-28 16:28:45.294217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:53.622 [2024-11-28 16:28:45.294227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.622 [2024-11-28 16:28:45.296076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.622 [2024-11-28 16:28:45.296115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:53.622 spare 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 [2024-11-28 16:28:45.306128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.622 [2024-11-28 16:28:45.307770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:53.622 [2024-11-28 16:28:45.307944] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:53.622 [2024-11-28 16:28:45.307958] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:53.622 [2024-11-28 16:28:45.308039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:53.622 [2024-11-28 16:28:45.308126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:53.622 [2024-11-28 16:28:45.308135] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:53.622 [2024-11-28 16:28:45.308208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.622 "name": "raid_bdev1", 00:15:53.622 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:53.622 "strip_size_kb": 0, 00:15:53.622 "state": "online", 00:15:53.622 "raid_level": "raid1", 00:15:53.622 "superblock": true, 00:15:53.622 "num_base_bdevs": 2, 00:15:53.622 "num_base_bdevs_discovered": 2, 00:15:53.622 "num_base_bdevs_operational": 2, 00:15:53.622 "base_bdevs_list": [ 00:15:53.622 { 00:15:53.622 "name": "BaseBdev1", 00:15:53.622 "uuid": "a8b1d325-9e30-54be-ac3b-cc38ccdda6e9", 00:15:53.622 "is_configured": true, 00:15:53.622 "data_offset": 256, 00:15:53.622 "data_size": 7936 00:15:53.622 }, 00:15:53.622 { 00:15:53.622 "name": "BaseBdev2", 00:15:53.622 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:53.622 "is_configured": true, 00:15:53.622 "data_offset": 256, 00:15:53.622 "data_size": 7936 00:15:53.622 } 00:15:53.622 ] 00:15:53.622 }' 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.622 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.191 16:28:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.191 [2024-11-28 16:28:45.753638] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:54.191 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:54.192 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:54.192 16:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:54.452 [2024-11-28 16:28:46.012975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:54.452 /dev/nbd0 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:54.452 
16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.452 1+0 records in 00:15:54.452 1+0 records out 00:15:54.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301345 s, 13.6 MB/s 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:54.452 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:55.022 7936+0 records in 00:15:55.022 7936+0 records out 00:15:55.022 32505856 bytes (33 MB, 31 MiB) copied, 0.565806 s, 57.5 MB/s 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.022 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:55.283 [2024-11-28 16:28:46.847340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.283 [2024-11-28 16:28:46.879359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.283 "name": "raid_bdev1", 00:15:55.283 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:55.283 "strip_size_kb": 0, 00:15:55.283 "state": "online", 00:15:55.283 "raid_level": "raid1", 00:15:55.283 "superblock": true, 00:15:55.283 "num_base_bdevs": 2, 00:15:55.283 "num_base_bdevs_discovered": 1, 00:15:55.283 "num_base_bdevs_operational": 1, 00:15:55.283 "base_bdevs_list": [ 00:15:55.283 { 00:15:55.283 "name": null, 00:15:55.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.283 "is_configured": false, 00:15:55.283 "data_offset": 0, 00:15:55.283 "data_size": 7936 00:15:55.283 }, 00:15:55.283 { 00:15:55.283 "name": "BaseBdev2", 00:15:55.283 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:55.283 "is_configured": true, 00:15:55.283 "data_offset": 256, 00:15:55.283 "data_size": 7936 00:15:55.283 } 00:15:55.283 ] 00:15:55.283 }' 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.283 16:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.852 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.852 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:55.852 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.852 [2024-11-28 16:28:47.346552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.852 [2024-11-28 16:28:47.349564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:55.852 [2024-11-28 16:28:47.351569] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.852 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.852 16:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.792 16:28:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.792 "name": "raid_bdev1", 00:15:56.792 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:56.792 "strip_size_kb": 0, 00:15:56.792 "state": "online", 00:15:56.792 "raid_level": "raid1", 00:15:56.792 "superblock": true, 00:15:56.792 "num_base_bdevs": 2, 00:15:56.792 "num_base_bdevs_discovered": 2, 00:15:56.792 "num_base_bdevs_operational": 2, 00:15:56.792 "process": { 00:15:56.792 "type": "rebuild", 00:15:56.792 "target": "spare", 00:15:56.792 "progress": { 00:15:56.792 "blocks": 2560, 00:15:56.792 "percent": 32 00:15:56.792 } 00:15:56.792 }, 00:15:56.792 "base_bdevs_list": [ 00:15:56.792 { 00:15:56.792 "name": "spare", 00:15:56.792 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:15:56.792 "is_configured": true, 00:15:56.792 "data_offset": 256, 00:15:56.792 "data_size": 7936 00:15:56.792 }, 00:15:56.792 { 00:15:56.792 "name": "BaseBdev2", 00:15:56.792 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:56.792 "is_configured": true, 00:15:56.792 "data_offset": 256, 00:15:56.792 "data_size": 7936 00:15:56.792 } 00:15:56.792 ] 00:15:56.792 }' 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.792 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.792 [2024-11-28 16:28:48.502684] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.792 [2024-11-28 16:28:48.559114] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:56.792 [2024-11-28 16:28:48.559223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.792 [2024-11-28 16:28:48.559244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.792 [2024-11-28 16:28:48.559252] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.052 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.053 "name": "raid_bdev1", 00:15:57.053 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:57.053 "strip_size_kb": 0, 00:15:57.053 "state": "online", 00:15:57.053 "raid_level": "raid1", 00:15:57.053 "superblock": true, 00:15:57.053 "num_base_bdevs": 2, 00:15:57.053 "num_base_bdevs_discovered": 1, 00:15:57.053 "num_base_bdevs_operational": 1, 00:15:57.053 "base_bdevs_list": [ 00:15:57.053 { 00:15:57.053 "name": null, 00:15:57.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.053 "is_configured": false, 00:15:57.053 "data_offset": 0, 00:15:57.053 "data_size": 7936 00:15:57.053 }, 00:15:57.053 { 00:15:57.053 "name": "BaseBdev2", 00:15:57.053 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:57.053 "is_configured": true, 00:15:57.053 "data_offset": 256, 00:15:57.053 "data_size": 7936 00:15:57.053 } 00:15:57.053 ] 00:15:57.053 }' 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.053 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.313 16:28:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.313 16:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.313 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.313 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.313 "name": "raid_bdev1", 00:15:57.313 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:57.313 "strip_size_kb": 0, 00:15:57.313 "state": "online", 00:15:57.313 "raid_level": "raid1", 00:15:57.313 "superblock": true, 00:15:57.313 "num_base_bdevs": 2, 00:15:57.313 "num_base_bdevs_discovered": 1, 00:15:57.313 "num_base_bdevs_operational": 1, 00:15:57.313 "base_bdevs_list": [ 00:15:57.313 { 00:15:57.313 "name": null, 00:15:57.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.313 "is_configured": false, 00:15:57.313 "data_offset": 0, 00:15:57.313 "data_size": 7936 00:15:57.313 }, 00:15:57.313 { 00:15:57.313 "name": "BaseBdev2", 00:15:57.313 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:57.313 "is_configured": true, 00:15:57.313 "data_offset": 256, 00:15:57.313 "data_size": 7936 
00:15:57.313 } 00:15:57.313 ] 00:15:57.313 }' 00:15:57.313 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.313 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.313 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.573 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.573 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.573 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.573 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.573 [2024-11-28 16:28:49.109649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.573 [2024-11-28 16:28:49.111384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:57.573 [2024-11-28 16:28:49.113223] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.573 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.573 16:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.512 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.512 "name": "raid_bdev1", 00:15:58.512 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:58.512 "strip_size_kb": 0, 00:15:58.512 "state": "online", 00:15:58.512 "raid_level": "raid1", 00:15:58.512 "superblock": true, 00:15:58.513 "num_base_bdevs": 2, 00:15:58.513 "num_base_bdevs_discovered": 2, 00:15:58.513 "num_base_bdevs_operational": 2, 00:15:58.513 "process": { 00:15:58.513 "type": "rebuild", 00:15:58.513 "target": "spare", 00:15:58.513 "progress": { 00:15:58.513 "blocks": 2560, 00:15:58.513 "percent": 32 00:15:58.513 } 00:15:58.513 }, 00:15:58.513 "base_bdevs_list": [ 00:15:58.513 { 00:15:58.513 "name": "spare", 00:15:58.513 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 256, 00:15:58.513 "data_size": 7936 00:15:58.513 }, 00:15:58.513 { 00:15:58.513 "name": "BaseBdev2", 00:15:58.513 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 256, 00:15:58.513 "data_size": 7936 00:15:58.513 } 00:15:58.513 ] 00:15:58.513 }' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:58.513 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=589 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.513 
16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.513 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.772 "name": "raid_bdev1", 00:15:58.772 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:58.772 "strip_size_kb": 0, 00:15:58.772 "state": "online", 00:15:58.772 "raid_level": "raid1", 00:15:58.772 "superblock": true, 00:15:58.772 "num_base_bdevs": 2, 00:15:58.772 "num_base_bdevs_discovered": 2, 00:15:58.772 "num_base_bdevs_operational": 2, 00:15:58.772 "process": { 00:15:58.772 "type": "rebuild", 00:15:58.772 "target": "spare", 00:15:58.772 "progress": { 00:15:58.772 "blocks": 2816, 00:15:58.772 "percent": 35 00:15:58.772 } 00:15:58.772 }, 00:15:58.772 "base_bdevs_list": [ 00:15:58.772 { 00:15:58.772 "name": "spare", 00:15:58.772 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:15:58.772 "is_configured": true, 00:15:58.772 "data_offset": 256, 00:15:58.772 "data_size": 7936 00:15:58.772 }, 00:15:58.772 { 00:15:58.772 "name": "BaseBdev2", 00:15:58.772 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:58.772 "is_configured": true, 00:15:58.772 "data_offset": 256, 00:15:58.772 "data_size": 7936 00:15:58.772 } 00:15:58.772 ] 00:15:58.772 }' 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.772 16:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.712 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.712 "name": "raid_bdev1", 00:15:59.712 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:15:59.712 "strip_size_kb": 0, 00:15:59.712 
"state": "online", 00:15:59.712 "raid_level": "raid1", 00:15:59.712 "superblock": true, 00:15:59.712 "num_base_bdevs": 2, 00:15:59.713 "num_base_bdevs_discovered": 2, 00:15:59.713 "num_base_bdevs_operational": 2, 00:15:59.713 "process": { 00:15:59.713 "type": "rebuild", 00:15:59.713 "target": "spare", 00:15:59.713 "progress": { 00:15:59.713 "blocks": 5632, 00:15:59.713 "percent": 70 00:15:59.713 } 00:15:59.713 }, 00:15:59.713 "base_bdevs_list": [ 00:15:59.713 { 00:15:59.713 "name": "spare", 00:15:59.713 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:15:59.713 "is_configured": true, 00:15:59.713 "data_offset": 256, 00:15:59.713 "data_size": 7936 00:15:59.713 }, 00:15:59.713 { 00:15:59.713 "name": "BaseBdev2", 00:15:59.713 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:15:59.713 "is_configured": true, 00:15:59.713 "data_offset": 256, 00:15:59.713 "data_size": 7936 00:15:59.713 } 00:15:59.713 ] 00:15:59.713 }' 00:15:59.713 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.973 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.973 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.973 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.973 16:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.543 [2024-11-28 16:28:52.224274] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.543 [2024-11-28 16:28:52.224406] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.543 [2024-11-28 16:28:52.224538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.803 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.063 "name": "raid_bdev1", 00:16:01.063 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:01.063 "strip_size_kb": 0, 00:16:01.063 "state": "online", 00:16:01.063 "raid_level": "raid1", 00:16:01.063 "superblock": true, 00:16:01.063 "num_base_bdevs": 2, 00:16:01.063 "num_base_bdevs_discovered": 2, 00:16:01.063 "num_base_bdevs_operational": 2, 00:16:01.063 "base_bdevs_list": [ 00:16:01.063 { 00:16:01.063 "name": "spare", 00:16:01.063 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:01.063 "is_configured": true, 00:16:01.063 "data_offset": 256, 00:16:01.063 "data_size": 7936 
00:16:01.063 }, 00:16:01.063 { 00:16:01.063 "name": "BaseBdev2", 00:16:01.063 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:01.063 "is_configured": true, 00:16:01.063 "data_offset": 256, 00:16:01.063 "data_size": 7936 00:16:01.063 } 00:16:01.063 ] 00:16:01.063 }' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.063 
16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.063 "name": "raid_bdev1", 00:16:01.063 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:01.063 "strip_size_kb": 0, 00:16:01.063 "state": "online", 00:16:01.063 "raid_level": "raid1", 00:16:01.063 "superblock": true, 00:16:01.063 "num_base_bdevs": 2, 00:16:01.063 "num_base_bdevs_discovered": 2, 00:16:01.063 "num_base_bdevs_operational": 2, 00:16:01.063 "base_bdevs_list": [ 00:16:01.063 { 00:16:01.063 "name": "spare", 00:16:01.063 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:01.063 "is_configured": true, 00:16:01.063 "data_offset": 256, 00:16:01.063 "data_size": 7936 00:16:01.063 }, 00:16:01.063 { 00:16:01.063 "name": "BaseBdev2", 00:16:01.063 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:01.063 "is_configured": true, 00:16:01.063 "data_offset": 256, 00:16:01.063 "data_size": 7936 00:16:01.063 } 00:16:01.063 ] 00:16:01.063 }' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.063 16:28:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.063 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.323 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.323 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.323 "name": "raid_bdev1", 00:16:01.323 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:01.323 "strip_size_kb": 0, 00:16:01.323 "state": "online", 00:16:01.323 "raid_level": "raid1", 00:16:01.323 "superblock": true, 00:16:01.323 "num_base_bdevs": 2, 00:16:01.323 "num_base_bdevs_discovered": 2, 00:16:01.323 "num_base_bdevs_operational": 2, 00:16:01.323 "base_bdevs_list": [ 00:16:01.323 { 00:16:01.323 "name": "spare", 00:16:01.323 "uuid": 
"9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:01.323 "is_configured": true, 00:16:01.323 "data_offset": 256, 00:16:01.323 "data_size": 7936 00:16:01.323 }, 00:16:01.323 { 00:16:01.323 "name": "BaseBdev2", 00:16:01.323 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:01.323 "is_configured": true, 00:16:01.323 "data_offset": 256, 00:16:01.323 "data_size": 7936 00:16:01.323 } 00:16:01.323 ] 00:16:01.323 }' 00:16:01.323 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.323 16:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.583 [2024-11-28 16:28:53.249331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.583 [2024-11-28 16:28:53.249399] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.583 [2024-11-28 16:28:53.249496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.583 [2024-11-28 16:28:53.249599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.583 [2024-11-28 16:28:53.249656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.583 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:01.843 
/dev/nbd0 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.843 1+0 records in 00:16:01.843 1+0 records out 00:16:01.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033504 s, 12.2 MB/s 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.843 16:28:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.843 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:02.103 /dev/nbd1 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:02.103 1+0 records in 00:16:02.103 1+0 records out 00:16:02.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299338 s, 13.7 MB/s 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.103 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.104 16:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.363 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.623 
16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.623 [2024-11-28 16:28:54.228186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:02.623 [2024-11-28 16:28:54.228241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.623 [2024-11-28 16:28:54.228263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:02.623 [2024-11-28 16:28:54.228276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.623 [2024-11-28 16:28:54.230198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.623 [2024-11-28 16:28:54.230242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:02.623 [2024-11-28 16:28:54.230300] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:16:02.623 [2024-11-28 16:28:54.230346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.623 [2024-11-28 16:28:54.230455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.623 spare 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.623 [2024-11-28 16:28:54.330339] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:02.623 [2024-11-28 16:28:54.330403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:02.623 [2024-11-28 16:28:54.330501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:02.623 [2024-11-28 16:28:54.330613] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:02.623 [2024-11-28 16:28:54.330624] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:02.623 [2024-11-28 16:28:54.330710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.623 "name": "raid_bdev1", 00:16:02.623 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:02.623 "strip_size_kb": 0, 00:16:02.623 "state": "online", 00:16:02.623 "raid_level": "raid1", 00:16:02.623 "superblock": true, 00:16:02.623 "num_base_bdevs": 2, 00:16:02.623 "num_base_bdevs_discovered": 2, 00:16:02.623 "num_base_bdevs_operational": 2, 00:16:02.623 "base_bdevs_list": [ 
00:16:02.623 { 00:16:02.623 "name": "spare", 00:16:02.623 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:02.623 "is_configured": true, 00:16:02.623 "data_offset": 256, 00:16:02.623 "data_size": 7936 00:16:02.623 }, 00:16:02.623 { 00:16:02.623 "name": "BaseBdev2", 00:16:02.623 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:02.623 "is_configured": true, 00:16:02.623 "data_offset": 256, 00:16:02.623 "data_size": 7936 00:16:02.623 } 00:16:02.623 ] 00:16:02.623 }' 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.623 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.193 "name": "raid_bdev1", 00:16:03.193 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:03.193 "strip_size_kb": 0, 00:16:03.193 "state": "online", 00:16:03.193 "raid_level": "raid1", 00:16:03.193 "superblock": true, 00:16:03.193 "num_base_bdevs": 2, 00:16:03.193 "num_base_bdevs_discovered": 2, 00:16:03.193 "num_base_bdevs_operational": 2, 00:16:03.193 "base_bdevs_list": [ 00:16:03.193 { 00:16:03.193 "name": "spare", 00:16:03.193 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:03.193 "is_configured": true, 00:16:03.193 "data_offset": 256, 00:16:03.193 "data_size": 7936 00:16:03.193 }, 00:16:03.193 { 00:16:03.193 "name": "BaseBdev2", 00:16:03.193 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:03.193 "is_configured": true, 00:16:03.193 "data_offset": 256, 00:16:03.193 "data_size": 7936 00:16:03.193 } 00:16:03.193 ] 00:16:03.193 }' 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.193 [2024-11-28 16:28:54.954982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.193 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.453 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.453 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.453 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.453 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.453 16:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.453 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.453 "name": "raid_bdev1", 00:16:03.453 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:03.453 "strip_size_kb": 0, 00:16:03.453 "state": "online", 00:16:03.453 "raid_level": "raid1", 00:16:03.453 "superblock": true, 00:16:03.453 "num_base_bdevs": 2, 00:16:03.453 "num_base_bdevs_discovered": 1, 00:16:03.453 "num_base_bdevs_operational": 1, 00:16:03.453 "base_bdevs_list": [ 00:16:03.453 { 00:16:03.453 "name": null, 00:16:03.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.453 "is_configured": false, 00:16:03.453 "data_offset": 0, 00:16:03.453 "data_size": 7936 00:16:03.453 }, 00:16:03.453 { 00:16:03.453 "name": "BaseBdev2", 00:16:03.453 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:03.453 "is_configured": true, 00:16:03.453 "data_offset": 256, 00:16:03.453 "data_size": 7936 00:16:03.453 } 00:16:03.453 ] 00:16:03.453 }' 00:16:03.453 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.453 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.714 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.714 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:03.714 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.714 [2024-11-28 16:28:55.394218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.714 [2024-11-28 16:28:55.394422] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:03.714 [2024-11-28 16:28:55.394506] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:03.714 [2024-11-28 16:28:55.394584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.714 [2024-11-28 16:28:55.396234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:03.714 [2024-11-28 16:28:55.398089] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.714 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.714 16:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.653 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.914 "name": "raid_bdev1", 00:16:04.914 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:04.914 "strip_size_kb": 0, 00:16:04.914 "state": "online", 00:16:04.914 "raid_level": "raid1", 00:16:04.914 "superblock": true, 00:16:04.914 "num_base_bdevs": 2, 00:16:04.914 "num_base_bdevs_discovered": 2, 00:16:04.914 "num_base_bdevs_operational": 2, 00:16:04.914 "process": { 00:16:04.914 "type": "rebuild", 00:16:04.914 "target": "spare", 00:16:04.914 "progress": { 00:16:04.914 "blocks": 2560, 00:16:04.914 "percent": 32 00:16:04.914 } 00:16:04.914 }, 00:16:04.914 "base_bdevs_list": [ 00:16:04.914 { 00:16:04.914 "name": "spare", 00:16:04.914 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:04.914 "is_configured": true, 00:16:04.914 "data_offset": 256, 00:16:04.914 "data_size": 7936 00:16:04.914 }, 00:16:04.914 { 00:16:04.914 "name": "BaseBdev2", 00:16:04.914 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:04.914 "is_configured": true, 00:16:04.914 "data_offset": 256, 00:16:04.914 "data_size": 7936 00:16:04.914 } 00:16:04.914 ] 00:16:04.914 }' 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.914 16:28:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.914 [2024-11-28 16:28:56.552916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.914 [2024-11-28 16:28:56.602177] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.914 [2024-11-28 16:28:56.602278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.914 [2024-11-28 16:28:56.602313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.914 [2024-11-28 16:28:56.602332] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.914 16:28:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.914 "name": "raid_bdev1", 00:16:04.914 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:04.914 "strip_size_kb": 0, 00:16:04.914 "state": "online", 00:16:04.914 "raid_level": "raid1", 00:16:04.914 "superblock": true, 00:16:04.914 "num_base_bdevs": 2, 00:16:04.914 "num_base_bdevs_discovered": 1, 00:16:04.914 "num_base_bdevs_operational": 1, 00:16:04.914 "base_bdevs_list": [ 00:16:04.914 { 00:16:04.914 "name": null, 00:16:04.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.914 "is_configured": false, 00:16:04.914 "data_offset": 0, 00:16:04.914 "data_size": 7936 00:16:04.914 }, 00:16:04.914 { 00:16:04.914 "name": "BaseBdev2", 00:16:04.914 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:04.914 "is_configured": true, 00:16:04.914 "data_offset": 256, 00:16:04.914 "data_size": 7936 00:16:04.914 } 
00:16:04.914 ] 00:16:04.914 }' 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.914 16:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.483 16:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.483 16:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.483 16:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.483 [2024-11-28 16:28:57.012125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.483 [2024-11-28 16:28:57.012217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.483 [2024-11-28 16:28:57.012256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:05.483 [2024-11-28 16:28:57.012284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.483 [2024-11-28 16:28:57.012516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.483 [2024-11-28 16:28:57.012569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.484 [2024-11-28 16:28:57.012649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:05.484 [2024-11-28 16:28:57.012686] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.484 [2024-11-28 16:28:57.012728] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:05.484 [2024-11-28 16:28:57.012817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.484 [2024-11-28 16:28:57.014299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:05.484 [2024-11-28 16:28:57.016143] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.484 spare 00:16:05.484 16:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.484 16:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.423 "name": 
"raid_bdev1", 00:16:06.423 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:06.423 "strip_size_kb": 0, 00:16:06.423 "state": "online", 00:16:06.423 "raid_level": "raid1", 00:16:06.423 "superblock": true, 00:16:06.423 "num_base_bdevs": 2, 00:16:06.423 "num_base_bdevs_discovered": 2, 00:16:06.423 "num_base_bdevs_operational": 2, 00:16:06.423 "process": { 00:16:06.423 "type": "rebuild", 00:16:06.423 "target": "spare", 00:16:06.423 "progress": { 00:16:06.423 "blocks": 2560, 00:16:06.423 "percent": 32 00:16:06.423 } 00:16:06.423 }, 00:16:06.423 "base_bdevs_list": [ 00:16:06.423 { 00:16:06.423 "name": "spare", 00:16:06.423 "uuid": "9c6b5e34-43c6-585a-891a-eaba8b53d155", 00:16:06.423 "is_configured": true, 00:16:06.423 "data_offset": 256, 00:16:06.423 "data_size": 7936 00:16:06.423 }, 00:16:06.423 { 00:16:06.423 "name": "BaseBdev2", 00:16:06.423 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:06.423 "is_configured": true, 00:16:06.423 "data_offset": 256, 00:16:06.423 "data_size": 7936 00:16:06.423 } 00:16:06.423 ] 00:16:06.423 }' 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.423 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.423 [2024-11-28 16:28:58.174920] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:06.683 [2024-11-28 16:28:58.220158] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.683 [2024-11-28 16:28:58.220280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.683 [2024-11-28 16:28:58.220314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.683 [2024-11-28 16:28:58.220336] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.683 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.683 "name": "raid_bdev1", 00:16:06.683 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:06.684 "strip_size_kb": 0, 00:16:06.684 "state": "online", 00:16:06.684 "raid_level": "raid1", 00:16:06.684 "superblock": true, 00:16:06.684 "num_base_bdevs": 2, 00:16:06.684 "num_base_bdevs_discovered": 1, 00:16:06.684 "num_base_bdevs_operational": 1, 00:16:06.684 "base_bdevs_list": [ 00:16:06.684 { 00:16:06.684 "name": null, 00:16:06.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.684 "is_configured": false, 00:16:06.684 "data_offset": 0, 00:16:06.684 "data_size": 7936 00:16:06.684 }, 00:16:06.684 { 00:16:06.684 "name": "BaseBdev2", 00:16:06.684 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:06.684 "is_configured": true, 00:16:06.684 "data_offset": 256, 00:16:06.684 "data_size": 7936 00:16:06.684 } 00:16:06.684 ] 00:16:06.684 }' 00:16:06.684 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.684 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.944 16:28:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.944 "name": "raid_bdev1", 00:16:06.944 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:06.944 "strip_size_kb": 0, 00:16:06.944 "state": "online", 00:16:06.944 "raid_level": "raid1", 00:16:06.944 "superblock": true, 00:16:06.944 "num_base_bdevs": 2, 00:16:06.944 "num_base_bdevs_discovered": 1, 00:16:06.944 "num_base_bdevs_operational": 1, 00:16:06.944 "base_bdevs_list": [ 00:16:06.944 { 00:16:06.944 "name": null, 00:16:06.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.944 "is_configured": false, 00:16:06.944 "data_offset": 0, 00:16:06.944 "data_size": 7936 00:16:06.944 }, 00:16:06.944 { 00:16:06.944 "name": "BaseBdev2", 00:16:06.944 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:06.944 "is_configured": true, 00:16:06.944 "data_offset": 256, 00:16:06.944 "data_size": 7936 00:16:06.944 } 00:16:06.944 ] 00:16:06.944 }' 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.944 [2024-11-28 16:28:58.706068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:06.944 [2024-11-28 16:28:58.706117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.944 [2024-11-28 16:28:58.706137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:06.944 [2024-11-28 16:28:58.706147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.944 [2024-11-28 16:28:58.706320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.944 [2024-11-28 16:28:58.706337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:06.944 [2024-11-28 16:28:58.706379] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:06.944 [2024-11-28 16:28:58.706398] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.944 [2024-11-28 16:28:58.706405] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:06.944 [2024-11-28 16:28:58.706415] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:06.944 BaseBdev1 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.944 16:28:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.326 "name": "raid_bdev1", 00:16:08.326 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:08.326 "strip_size_kb": 0, 00:16:08.326 "state": "online", 00:16:08.326 "raid_level": "raid1", 00:16:08.326 "superblock": true, 00:16:08.326 "num_base_bdevs": 2, 00:16:08.326 "num_base_bdevs_discovered": 1, 00:16:08.326 "num_base_bdevs_operational": 1, 00:16:08.326 "base_bdevs_list": [ 00:16:08.326 { 00:16:08.326 "name": null, 00:16:08.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.326 "is_configured": false, 00:16:08.326 "data_offset": 0, 00:16:08.326 "data_size": 7936 00:16:08.326 }, 00:16:08.326 { 00:16:08.326 "name": "BaseBdev2", 00:16:08.326 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:08.326 "is_configured": true, 00:16:08.326 "data_offset": 256, 00:16:08.326 "data_size": 7936 00:16:08.326 } 00:16:08.326 ] 00:16:08.326 }' 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.326 16:28:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.586 "name": "raid_bdev1", 00:16:08.586 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:08.586 "strip_size_kb": 0, 00:16:08.586 "state": "online", 00:16:08.586 "raid_level": "raid1", 00:16:08.586 "superblock": true, 00:16:08.586 "num_base_bdevs": 2, 00:16:08.586 "num_base_bdevs_discovered": 1, 00:16:08.586 "num_base_bdevs_operational": 1, 00:16:08.586 "base_bdevs_list": [ 00:16:08.586 { 00:16:08.586 "name": null, 00:16:08.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.586 "is_configured": false, 00:16:08.586 "data_offset": 0, 00:16:08.586 "data_size": 7936 00:16:08.586 }, 00:16:08.586 { 00:16:08.586 "name": "BaseBdev2", 00:16:08.586 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:08.586 "is_configured": 
true, 00:16:08.586 "data_offset": 256, 00:16:08.586 "data_size": 7936 00:16:08.586 } 00:16:08.586 ] 00:16:08.586 }' 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.586 [2024-11-28 16:29:00.347281] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.586 [2024-11-28 16:29:00.347425] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.586 [2024-11-28 16:29:00.347438] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:08.586 request: 00:16:08.586 { 00:16:08.586 "base_bdev": "BaseBdev1", 00:16:08.586 "raid_bdev": "raid_bdev1", 00:16:08.586 "method": "bdev_raid_add_base_bdev", 00:16:08.586 "req_id": 1 00:16:08.586 } 00:16:08.586 Got JSON-RPC error response 00:16:08.586 response: 00:16:08.586 { 00:16:08.586 "code": -22, 00:16:08.586 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:08.586 } 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.586 16:29:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.967 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.967 "name": "raid_bdev1", 00:16:09.967 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:09.967 "strip_size_kb": 0, 00:16:09.967 "state": "online", 00:16:09.967 "raid_level": "raid1", 00:16:09.967 "superblock": true, 00:16:09.967 "num_base_bdevs": 2, 00:16:09.967 "num_base_bdevs_discovered": 1, 00:16:09.967 "num_base_bdevs_operational": 1, 00:16:09.967 "base_bdevs_list": [ 00:16:09.967 { 00:16:09.967 "name": null, 00:16:09.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.967 "is_configured": false, 00:16:09.967 
"data_offset": 0, 00:16:09.967 "data_size": 7936 00:16:09.967 }, 00:16:09.967 { 00:16:09.967 "name": "BaseBdev2", 00:16:09.967 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:09.967 "is_configured": true, 00:16:09.968 "data_offset": 256, 00:16:09.968 "data_size": 7936 00:16:09.968 } 00:16:09.968 ] 00:16:09.968 }' 00:16:09.968 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.968 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.228 "name": "raid_bdev1", 00:16:10.228 "uuid": "fb35c728-aba9-4031-828f-f3c50ec09c7e", 00:16:10.228 
"strip_size_kb": 0, 00:16:10.228 "state": "online", 00:16:10.228 "raid_level": "raid1", 00:16:10.228 "superblock": true, 00:16:10.228 "num_base_bdevs": 2, 00:16:10.228 "num_base_bdevs_discovered": 1, 00:16:10.228 "num_base_bdevs_operational": 1, 00:16:10.228 "base_bdevs_list": [ 00:16:10.228 { 00:16:10.228 "name": null, 00:16:10.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.228 "is_configured": false, 00:16:10.228 "data_offset": 0, 00:16:10.228 "data_size": 7936 00:16:10.228 }, 00:16:10.228 { 00:16:10.228 "name": "BaseBdev2", 00:16:10.228 "uuid": "4bb7856e-5db5-528d-9b0f-93e5173c9a20", 00:16:10.228 "is_configured": true, 00:16:10.228 "data_offset": 256, 00:16:10.228 "data_size": 7936 00:16:10.228 } 00:16:10.228 ] 00:16:10.228 }' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98098 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98098 ']' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98098 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98098 00:16:10.228 killing process with 
pid 98098 00:16:10.228 Received shutdown signal, test time was about 60.000000 seconds 00:16:10.228 00:16:10.228 Latency(us) 00:16:10.228 [2024-11-28T16:29:01.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.228 [2024-11-28T16:29:01.999Z] =================================================================================================================== 00:16:10.228 [2024-11-28T16:29:01.999Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98098' 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98098 00:16:10.228 16:29:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98098 00:16:10.228 [2024-11-28 16:29:01.955979] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.228 [2024-11-28 16:29:01.956109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.228 [2024-11-28 16:29:01.956161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.228 [2024-11-28 16:29:01.956170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:10.228 [2024-11-28 16:29:01.988870] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.488 16:29:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:10.488 00:16:10.488 real 0m17.942s 00:16:10.488 user 0m23.697s 00:16:10.488 sys 0m2.506s 00:16:10.488 ************************************ 
00:16:10.488 END TEST raid_rebuild_test_sb_md_separate 00:16:10.488 ************************************ 00:16:10.488 16:29:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:10.488 16:29:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.748 16:29:02 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:10.748 16:29:02 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:10.748 16:29:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:10.748 16:29:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:10.748 16:29:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.748 ************************************ 00:16:10.748 START TEST raid_state_function_test_sb_md_interleaved 00:16:10.748 ************************************ 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev1 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:10.748 16:29:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98768 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:10.748 Process raid pid: 98768 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98768' 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98768 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98768 ']' 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.748 16:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.748 [2024-11-28 16:29:02.399730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:10.749 [2024-11-28 16:29:02.399959] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.009 [2024-11-28 16:29:02.559482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.009 [2024-11-28 16:29:02.605248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.009 [2024-11-28 16:29:02.647420] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.009 [2024-11-28 16:29:02.647532] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.578 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:11.578 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:11.578 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:11.578 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.579 [2024-11-28 16:29:03.216730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.579 [2024-11-28 16:29:03.216820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.579 [2024-11-28 16:29:03.216874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.579 [2024-11-28 16:29:03.216899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.579 16:29:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.579 16:29:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.579 "name": "Existed_Raid", 00:16:11.579 "uuid": "b296d518-78a8-46f4-b5e2-e434d14e295c", 00:16:11.579 "strip_size_kb": 0, 00:16:11.579 "state": "configuring", 00:16:11.579 "raid_level": "raid1", 00:16:11.579 "superblock": true, 00:16:11.579 "num_base_bdevs": 2, 00:16:11.579 "num_base_bdevs_discovered": 0, 00:16:11.579 "num_base_bdevs_operational": 2, 00:16:11.579 "base_bdevs_list": [ 00:16:11.579 { 00:16:11.579 "name": "BaseBdev1", 00:16:11.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.579 "is_configured": false, 00:16:11.579 "data_offset": 0, 00:16:11.579 "data_size": 0 00:16:11.579 }, 00:16:11.579 { 00:16:11.579 "name": "BaseBdev2", 00:16:11.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.579 "is_configured": false, 00:16:11.579 "data_offset": 0, 00:16:11.579 "data_size": 0 00:16:11.579 } 00:16:11.579 ] 00:16:11.579 }' 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.579 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 [2024-11-28 16:29:03.631917] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.154 [2024-11-28 16:29:03.631962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 [2024-11-28 16:29:03.639946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.154 [2024-11-28 16:29:03.639992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.154 [2024-11-28 16:29:03.640001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.154 [2024-11-28 16:29:03.640010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 [2024-11-28 16:29:03.657003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.154 BaseBdev1 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 [ 00:16:12.154 { 00:16:12.154 "name": "BaseBdev1", 00:16:12.154 "aliases": [ 00:16:12.154 "52fd8a18-f623-4049-9393-1e4a52c9105b" 00:16:12.154 ], 00:16:12.154 "product_name": "Malloc disk", 00:16:12.154 "block_size": 4128, 00:16:12.154 "num_blocks": 8192, 00:16:12.154 "uuid": "52fd8a18-f623-4049-9393-1e4a52c9105b", 00:16:12.154 "md_size": 32, 00:16:12.154 
"md_interleave": true, 00:16:12.154 "dif_type": 0, 00:16:12.154 "assigned_rate_limits": { 00:16:12.154 "rw_ios_per_sec": 0, 00:16:12.154 "rw_mbytes_per_sec": 0, 00:16:12.154 "r_mbytes_per_sec": 0, 00:16:12.154 "w_mbytes_per_sec": 0 00:16:12.154 }, 00:16:12.154 "claimed": true, 00:16:12.154 "claim_type": "exclusive_write", 00:16:12.154 "zoned": false, 00:16:12.154 "supported_io_types": { 00:16:12.154 "read": true, 00:16:12.154 "write": true, 00:16:12.154 "unmap": true, 00:16:12.154 "flush": true, 00:16:12.154 "reset": true, 00:16:12.154 "nvme_admin": false, 00:16:12.154 "nvme_io": false, 00:16:12.154 "nvme_io_md": false, 00:16:12.154 "write_zeroes": true, 00:16:12.154 "zcopy": true, 00:16:12.154 "get_zone_info": false, 00:16:12.154 "zone_management": false, 00:16:12.154 "zone_append": false, 00:16:12.154 "compare": false, 00:16:12.154 "compare_and_write": false, 00:16:12.154 "abort": true, 00:16:12.154 "seek_hole": false, 00:16:12.154 "seek_data": false, 00:16:12.154 "copy": true, 00:16:12.154 "nvme_iov_md": false 00:16:12.154 }, 00:16:12.154 "memory_domains": [ 00:16:12.154 { 00:16:12.154 "dma_device_id": "system", 00:16:12.154 "dma_device_type": 1 00:16:12.154 }, 00:16:12.154 { 00:16:12.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.154 "dma_device_type": 2 00:16:12.154 } 00:16:12.154 ], 00:16:12.154 "driver_specific": {} 00:16:12.154 } 00:16:12.154 ] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.154 16:29:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.154 "name": "Existed_Raid", 00:16:12.154 "uuid": "f0376c40-77d9-4f3a-8084-62b48cd53769", 00:16:12.154 "strip_size_kb": 0, 00:16:12.154 "state": "configuring", 00:16:12.154 "raid_level": "raid1", 
00:16:12.154 "superblock": true, 00:16:12.154 "num_base_bdevs": 2, 00:16:12.154 "num_base_bdevs_discovered": 1, 00:16:12.154 "num_base_bdevs_operational": 2, 00:16:12.154 "base_bdevs_list": [ 00:16:12.154 { 00:16:12.154 "name": "BaseBdev1", 00:16:12.154 "uuid": "52fd8a18-f623-4049-9393-1e4a52c9105b", 00:16:12.154 "is_configured": true, 00:16:12.154 "data_offset": 256, 00:16:12.154 "data_size": 7936 00:16:12.154 }, 00:16:12.154 { 00:16:12.154 "name": "BaseBdev2", 00:16:12.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.154 "is_configured": false, 00:16:12.154 "data_offset": 0, 00:16:12.154 "data_size": 0 00:16:12.154 } 00:16:12.154 ] 00:16:12.154 }' 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.154 16:29:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.426 [2024-11-28 16:29:04.132222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.426 [2024-11-28 16:29:04.132317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.426 [2024-11-28 16:29:04.140301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.426 [2024-11-28 16:29:04.142131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.426 [2024-11-28 16:29:04.142231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.426 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.426 
16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.427 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.427 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.427 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.427 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.427 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.427 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.700 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.700 "name": "Existed_Raid", 00:16:12.700 "uuid": "fc74a36a-cb16-430a-8d42-f93901f1ac52", 00:16:12.700 "strip_size_kb": 0, 00:16:12.700 "state": "configuring", 00:16:12.700 "raid_level": "raid1", 00:16:12.700 "superblock": true, 00:16:12.700 "num_base_bdevs": 2, 00:16:12.700 "num_base_bdevs_discovered": 1, 00:16:12.700 "num_base_bdevs_operational": 2, 00:16:12.700 "base_bdevs_list": [ 00:16:12.700 { 00:16:12.700 "name": "BaseBdev1", 00:16:12.700 "uuid": "52fd8a18-f623-4049-9393-1e4a52c9105b", 00:16:12.700 "is_configured": true, 00:16:12.700 "data_offset": 256, 00:16:12.700 "data_size": 7936 00:16:12.700 }, 00:16:12.700 { 00:16:12.700 "name": "BaseBdev2", 00:16:12.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.701 "is_configured": false, 00:16:12.701 "data_offset": 0, 00:16:12.701 "data_size": 0 00:16:12.701 } 00:16:12.701 ] 00:16:12.701 }' 00:16:12.701 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:12.701 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.970 [2024-11-28 16:29:04.587958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.970 [2024-11-28 16:29:04.588260] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:12.970 [2024-11-28 16:29:04.588323] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:12.970 [2024-11-28 16:29:04.588465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:12.970 [2024-11-28 16:29:04.588581] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:12.970 [2024-11-28 16:29:04.588630] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:12.970 BaseBdev2 00:16:12.970 [2024-11-28 16:29:04.588793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.970 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.971 [ 00:16:12.971 { 00:16:12.971 "name": "BaseBdev2", 00:16:12.971 "aliases": [ 00:16:12.971 "c51be710-a4cb-41bf-a437-c16b93f8b0a8" 00:16:12.971 ], 00:16:12.971 "product_name": "Malloc disk", 00:16:12.971 "block_size": 4128, 00:16:12.971 "num_blocks": 8192, 00:16:12.971 "uuid": "c51be710-a4cb-41bf-a437-c16b93f8b0a8", 00:16:12.971 "md_size": 32, 00:16:12.971 "md_interleave": true, 00:16:12.971 "dif_type": 0, 00:16:12.971 "assigned_rate_limits": { 00:16:12.971 "rw_ios_per_sec": 0, 00:16:12.971 "rw_mbytes_per_sec": 0, 00:16:12.971 "r_mbytes_per_sec": 0, 00:16:12.971 "w_mbytes_per_sec": 0 00:16:12.971 }, 00:16:12.971 "claimed": true, 00:16:12.971 "claim_type": "exclusive_write", 
00:16:12.971 "zoned": false, 00:16:12.971 "supported_io_types": { 00:16:12.971 "read": true, 00:16:12.971 "write": true, 00:16:12.971 "unmap": true, 00:16:12.971 "flush": true, 00:16:12.971 "reset": true, 00:16:12.971 "nvme_admin": false, 00:16:12.971 "nvme_io": false, 00:16:12.971 "nvme_io_md": false, 00:16:12.971 "write_zeroes": true, 00:16:12.971 "zcopy": true, 00:16:12.971 "get_zone_info": false, 00:16:12.971 "zone_management": false, 00:16:12.971 "zone_append": false, 00:16:12.971 "compare": false, 00:16:12.971 "compare_and_write": false, 00:16:12.971 "abort": true, 00:16:12.971 "seek_hole": false, 00:16:12.971 "seek_data": false, 00:16:12.971 "copy": true, 00:16:12.971 "nvme_iov_md": false 00:16:12.971 }, 00:16:12.971 "memory_domains": [ 00:16:12.971 { 00:16:12.971 "dma_device_id": "system", 00:16:12.971 "dma_device_type": 1 00:16:12.971 }, 00:16:12.971 { 00:16:12.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.971 "dma_device_type": 2 00:16:12.971 } 00:16:12.971 ], 00:16:12.971 "driver_specific": {} 00:16:12.971 } 00:16:12.971 ] 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.971 
16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.971 "name": "Existed_Raid", 00:16:12.971 "uuid": "fc74a36a-cb16-430a-8d42-f93901f1ac52", 00:16:12.971 "strip_size_kb": 0, 00:16:12.971 "state": "online", 00:16:12.971 "raid_level": "raid1", 00:16:12.971 "superblock": true, 00:16:12.971 "num_base_bdevs": 2, 00:16:12.971 "num_base_bdevs_discovered": 2, 00:16:12.971 
"num_base_bdevs_operational": 2, 00:16:12.971 "base_bdevs_list": [ 00:16:12.971 { 00:16:12.971 "name": "BaseBdev1", 00:16:12.971 "uuid": "52fd8a18-f623-4049-9393-1e4a52c9105b", 00:16:12.971 "is_configured": true, 00:16:12.971 "data_offset": 256, 00:16:12.971 "data_size": 7936 00:16:12.971 }, 00:16:12.971 { 00:16:12.971 "name": "BaseBdev2", 00:16:12.971 "uuid": "c51be710-a4cb-41bf-a437-c16b93f8b0a8", 00:16:12.971 "is_configured": true, 00:16:12.971 "data_offset": 256, 00:16:12.971 "data_size": 7936 00:16:12.971 } 00:16:12.971 ] 00:16:12.971 }' 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.971 16:29:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.541 16:29:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.541 [2024-11-28 16:29:05.075411] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.541 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.541 "name": "Existed_Raid", 00:16:13.541 "aliases": [ 00:16:13.541 "fc74a36a-cb16-430a-8d42-f93901f1ac52" 00:16:13.541 ], 00:16:13.541 "product_name": "Raid Volume", 00:16:13.541 "block_size": 4128, 00:16:13.541 "num_blocks": 7936, 00:16:13.541 "uuid": "fc74a36a-cb16-430a-8d42-f93901f1ac52", 00:16:13.541 "md_size": 32, 00:16:13.541 "md_interleave": true, 00:16:13.541 "dif_type": 0, 00:16:13.541 "assigned_rate_limits": { 00:16:13.541 "rw_ios_per_sec": 0, 00:16:13.541 "rw_mbytes_per_sec": 0, 00:16:13.541 "r_mbytes_per_sec": 0, 00:16:13.541 "w_mbytes_per_sec": 0 00:16:13.541 }, 00:16:13.541 "claimed": false, 00:16:13.541 "zoned": false, 00:16:13.541 "supported_io_types": { 00:16:13.542 "read": true, 00:16:13.542 "write": true, 00:16:13.542 "unmap": false, 00:16:13.542 "flush": false, 00:16:13.542 "reset": true, 00:16:13.542 "nvme_admin": false, 00:16:13.542 "nvme_io": false, 00:16:13.542 "nvme_io_md": false, 00:16:13.542 "write_zeroes": true, 00:16:13.542 "zcopy": false, 00:16:13.542 "get_zone_info": false, 00:16:13.542 "zone_management": false, 00:16:13.542 "zone_append": false, 00:16:13.542 "compare": false, 00:16:13.542 "compare_and_write": false, 00:16:13.542 "abort": false, 00:16:13.542 "seek_hole": false, 00:16:13.542 "seek_data": false, 00:16:13.542 "copy": false, 00:16:13.542 "nvme_iov_md": false 00:16:13.542 }, 00:16:13.542 "memory_domains": [ 00:16:13.542 { 00:16:13.542 "dma_device_id": "system", 00:16:13.542 "dma_device_type": 1 00:16:13.542 }, 00:16:13.542 { 00:16:13.542 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:13.542 "dma_device_type": 2 00:16:13.542 }, 00:16:13.542 { 00:16:13.542 "dma_device_id": "system", 00:16:13.542 "dma_device_type": 1 00:16:13.542 }, 00:16:13.542 { 00:16:13.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.542 "dma_device_type": 2 00:16:13.542 } 00:16:13.542 ], 00:16:13.542 "driver_specific": { 00:16:13.542 "raid": { 00:16:13.542 "uuid": "fc74a36a-cb16-430a-8d42-f93901f1ac52", 00:16:13.542 "strip_size_kb": 0, 00:16:13.542 "state": "online", 00:16:13.542 "raid_level": "raid1", 00:16:13.542 "superblock": true, 00:16:13.542 "num_base_bdevs": 2, 00:16:13.542 "num_base_bdevs_discovered": 2, 00:16:13.542 "num_base_bdevs_operational": 2, 00:16:13.542 "base_bdevs_list": [ 00:16:13.542 { 00:16:13.542 "name": "BaseBdev1", 00:16:13.542 "uuid": "52fd8a18-f623-4049-9393-1e4a52c9105b", 00:16:13.542 "is_configured": true, 00:16:13.542 "data_offset": 256, 00:16:13.542 "data_size": 7936 00:16:13.542 }, 00:16:13.542 { 00:16:13.542 "name": "BaseBdev2", 00:16:13.542 "uuid": "c51be710-a4cb-41bf-a437-c16b93f8b0a8", 00:16:13.542 "is_configured": true, 00:16:13.542 "data_offset": 256, 00:16:13.542 "data_size": 7936 00:16:13.542 } 00:16:13.542 ] 00:16:13.542 } 00:16:13.542 } 00:16:13.542 }' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:13.542 BaseBdev2' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:13.542 
16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.542 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.802 [2024-11-28 16:29:05.310915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.802 16:29:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.802 "name": "Existed_Raid", 00:16:13.802 "uuid": "fc74a36a-cb16-430a-8d42-f93901f1ac52", 00:16:13.802 "strip_size_kb": 0, 00:16:13.802 "state": "online", 00:16:13.802 "raid_level": "raid1", 00:16:13.802 "superblock": true, 00:16:13.802 "num_base_bdevs": 2, 00:16:13.802 "num_base_bdevs_discovered": 1, 00:16:13.802 "num_base_bdevs_operational": 1, 00:16:13.802 "base_bdevs_list": [ 00:16:13.802 { 00:16:13.802 "name": null, 00:16:13.802 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:13.802 "is_configured": false, 00:16:13.802 "data_offset": 0, 00:16:13.802 "data_size": 7936 00:16:13.802 }, 00:16:13.802 { 00:16:13.802 "name": "BaseBdev2", 00:16:13.802 "uuid": "c51be710-a4cb-41bf-a437-c16b93f8b0a8", 00:16:13.802 "is_configured": true, 00:16:13.802 "data_offset": 256, 00:16:13.802 "data_size": 7936 00:16:13.802 } 00:16:13.802 ] 00:16:13.802 }' 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.802 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:14.062 16:29:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.062 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.323 [2024-11-28 16:29:05.833608] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.323 [2024-11-28 16:29:05.833700] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.323 [2024-11-28 16:29:05.845610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.323 [2024-11-28 16:29:05.845759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.323 [2024-11-28 16:29:05.845817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98768 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98768 ']' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98768 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98768 00:16:14.323 killing process with pid 98768 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98768' 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98768 00:16:14.323 16:29:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98768 00:16:14.323 [2024-11-28 16:29:05.933215] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.323 [2024-11-28 16:29:05.934239] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.584 
16:29:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:14.584 00:16:14.584 real 0m3.885s 00:16:14.584 user 0m6.084s 00:16:14.584 sys 0m0.840s 00:16:14.584 ************************************ 00:16:14.584 END TEST raid_state_function_test_sb_md_interleaved 00:16:14.584 ************************************ 00:16:14.584 16:29:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.584 16:29:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.584 16:29:06 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:14.584 16:29:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:14.584 16:29:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.584 16:29:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.584 ************************************ 00:16:14.584 START TEST raid_superblock_test_md_interleaved 00:16:14.584 ************************************ 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99007 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99007 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99007 ']' 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.584 16:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.844 [2024-11-28 16:29:06.360913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:14.844 [2024-11-28 16:29:06.361154] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99007 ] 00:16:14.844 [2024-11-28 16:29:06.520701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.844 [2024-11-28 16:29:06.568551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.844 [2024-11-28 16:29:06.611628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.844 [2024-11-28 16:29:06.611745] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.415 malloc1 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.415 [2024-11-28 16:29:07.178243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.415 [2024-11-28 16:29:07.178308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.415 [2024-11-28 16:29:07.178335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.415 [2024-11-28 16:29:07.178352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.415 
[2024-11-28 16:29:07.180174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.415 [2024-11-28 16:29:07.180214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.415 pt1 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.415 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.675 malloc2 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.675 [2024-11-28 16:29:07.223662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.675 [2024-11-28 16:29:07.223772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.675 [2024-11-28 16:29:07.223809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.675 [2024-11-28 16:29:07.223866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.675 [2024-11-28 16:29:07.228138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.675 [2024-11-28 16:29:07.228213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.675 pt2 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.675 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.675 [2024-11-28 16:29:07.236476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:15.675 [2024-11-28 16:29:07.239362] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.675 [2024-11-28 16:29:07.239588] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:15.675 [2024-11-28 16:29:07.239623] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:15.675 [2024-11-28 16:29:07.239746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:15.675 [2024-11-28 16:29:07.239870] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:15.675 [2024-11-28 16:29:07.239889] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:15.676 [2024-11-28 16:29:07.240034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.676 
16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.676 "name": "raid_bdev1", 00:16:15.676 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:15.676 "strip_size_kb": 0, 00:16:15.676 "state": "online", 00:16:15.676 "raid_level": "raid1", 00:16:15.676 "superblock": true, 00:16:15.676 "num_base_bdevs": 2, 00:16:15.676 "num_base_bdevs_discovered": 2, 00:16:15.676 "num_base_bdevs_operational": 2, 00:16:15.676 "base_bdevs_list": [ 00:16:15.676 { 00:16:15.676 "name": "pt1", 00:16:15.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.676 "is_configured": true, 00:16:15.676 "data_offset": 256, 00:16:15.676 "data_size": 7936 00:16:15.676 }, 00:16:15.676 { 00:16:15.676 "name": "pt2", 00:16:15.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.676 "is_configured": true, 00:16:15.676 "data_offset": 256, 00:16:15.676 "data_size": 7936 00:16:15.676 } 00:16:15.676 ] 00:16:15.676 }' 00:16:15.676 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.676 16:29:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.936 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:15.936 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:15.936 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:15.936 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:15.936 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:15.936 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.196 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.196 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.196 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.196 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.196 [2024-11-28 16:29:07.712180] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.196 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.196 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.196 "name": "raid_bdev1", 00:16:16.196 "aliases": [ 00:16:16.196 "a53cdce4-52c6-43ed-a04e-eb910e491474" 00:16:16.196 ], 00:16:16.197 "product_name": "Raid Volume", 00:16:16.197 "block_size": 4128, 00:16:16.197 "num_blocks": 7936, 00:16:16.197 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:16.197 "md_size": 32, 
00:16:16.197 "md_interleave": true, 00:16:16.197 "dif_type": 0, 00:16:16.197 "assigned_rate_limits": { 00:16:16.197 "rw_ios_per_sec": 0, 00:16:16.197 "rw_mbytes_per_sec": 0, 00:16:16.197 "r_mbytes_per_sec": 0, 00:16:16.197 "w_mbytes_per_sec": 0 00:16:16.197 }, 00:16:16.197 "claimed": false, 00:16:16.197 "zoned": false, 00:16:16.197 "supported_io_types": { 00:16:16.197 "read": true, 00:16:16.197 "write": true, 00:16:16.197 "unmap": false, 00:16:16.197 "flush": false, 00:16:16.197 "reset": true, 00:16:16.197 "nvme_admin": false, 00:16:16.197 "nvme_io": false, 00:16:16.197 "nvme_io_md": false, 00:16:16.197 "write_zeroes": true, 00:16:16.197 "zcopy": false, 00:16:16.197 "get_zone_info": false, 00:16:16.197 "zone_management": false, 00:16:16.197 "zone_append": false, 00:16:16.197 "compare": false, 00:16:16.197 "compare_and_write": false, 00:16:16.197 "abort": false, 00:16:16.197 "seek_hole": false, 00:16:16.197 "seek_data": false, 00:16:16.197 "copy": false, 00:16:16.197 "nvme_iov_md": false 00:16:16.197 }, 00:16:16.197 "memory_domains": [ 00:16:16.197 { 00:16:16.197 "dma_device_id": "system", 00:16:16.197 "dma_device_type": 1 00:16:16.197 }, 00:16:16.197 { 00:16:16.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.197 "dma_device_type": 2 00:16:16.197 }, 00:16:16.197 { 00:16:16.197 "dma_device_id": "system", 00:16:16.197 "dma_device_type": 1 00:16:16.197 }, 00:16:16.197 { 00:16:16.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.197 "dma_device_type": 2 00:16:16.197 } 00:16:16.197 ], 00:16:16.197 "driver_specific": { 00:16:16.197 "raid": { 00:16:16.197 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:16.197 "strip_size_kb": 0, 00:16:16.197 "state": "online", 00:16:16.197 "raid_level": "raid1", 00:16:16.197 "superblock": true, 00:16:16.197 "num_base_bdevs": 2, 00:16:16.197 "num_base_bdevs_discovered": 2, 00:16:16.197 "num_base_bdevs_operational": 2, 00:16:16.197 "base_bdevs_list": [ 00:16:16.197 { 00:16:16.197 "name": "pt1", 00:16:16.197 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:16.197 "is_configured": true, 00:16:16.197 "data_offset": 256, 00:16:16.197 "data_size": 7936 00:16:16.197 }, 00:16:16.197 { 00:16:16.197 "name": "pt2", 00:16:16.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.197 "is_configured": true, 00:16:16.197 "data_offset": 256, 00:16:16.197 "data_size": 7936 00:16:16.197 } 00:16:16.197 ] 00:16:16.197 } 00:16:16.197 } 00:16:16.197 }' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.197 pt2' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:16.197 16:29:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.197 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.197 [2024-11-28 16:29:07.959631] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.458 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a53cdce4-52c6-43ed-a04e-eb910e491474 00:16:16.458 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a53cdce4-52c6-43ed-a04e-eb910e491474 ']' 00:16:16.458 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.458 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 16:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 [2024-11-28 16:29:08.003355] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.458 [2024-11-28 16:29:08.003384] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.458 [2024-11-28 16:29:08.003441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.458 [2024-11-28 16:29:08.003504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.458 [2024-11-28 16:29:08.003512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 16:29:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 16:29:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.458 [2024-11-28 16:29:08.131143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:16.458 [2024-11-28 16:29:08.132991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:16.458 [2024-11-28 16:29:08.133093] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:16.458 [2024-11-28 16:29:08.133192] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:16.458 [2024-11-28 16:29:08.133241] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.458 [2024-11-28 16:29:08.133269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:16.458 request: 00:16:16.458 { 00:16:16.458 "name": "raid_bdev1", 00:16:16.458 "raid_level": "raid1", 00:16:16.458 "base_bdevs": [ 00:16:16.458 "malloc1", 00:16:16.458 "malloc2" 00:16:16.458 ], 00:16:16.458 "superblock": false, 00:16:16.458 "method": "bdev_raid_create", 00:16:16.458 "req_id": 1 00:16:16.458 } 00:16:16.458 Got JSON-RPC error response 00:16:16.458 response: 00:16:16.458 { 00:16:16.458 "code": -17, 00:16:16.458 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:16.458 } 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:16.458 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:16.459 16:29:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.459 [2024-11-28 16:29:08.191004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.459 [2024-11-28 16:29:08.191088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.459 [2024-11-28 16:29:08.191121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.459 [2024-11-28 16:29:08.191152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.459 [2024-11-28 16:29:08.192972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.459 [2024-11-28 16:29:08.193041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.459 [2024-11-28 16:29:08.193106] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:16.459 [2024-11-28 16:29:08.193157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.459 pt1 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.459 16:29:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.459 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.719 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.719 
"name": "raid_bdev1", 00:16:16.719 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:16.719 "strip_size_kb": 0, 00:16:16.719 "state": "configuring", 00:16:16.719 "raid_level": "raid1", 00:16:16.719 "superblock": true, 00:16:16.719 "num_base_bdevs": 2, 00:16:16.719 "num_base_bdevs_discovered": 1, 00:16:16.719 "num_base_bdevs_operational": 2, 00:16:16.719 "base_bdevs_list": [ 00:16:16.719 { 00:16:16.719 "name": "pt1", 00:16:16.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.719 "is_configured": true, 00:16:16.719 "data_offset": 256, 00:16:16.719 "data_size": 7936 00:16:16.719 }, 00:16:16.719 { 00:16:16.719 "name": null, 00:16:16.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.719 "is_configured": false, 00:16:16.719 "data_offset": 256, 00:16:16.719 "data_size": 7936 00:16:16.719 } 00:16:16.719 ] 00:16:16.719 }' 00:16:16.719 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.719 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.979 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:16.979 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:16.979 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.980 [2024-11-28 16:29:08.594335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.980 [2024-11-28 16:29:08.594386] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.980 [2024-11-28 16:29:08.594405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:16.980 [2024-11-28 16:29:08.594413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.980 [2024-11-28 16:29:08.594516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.980 [2024-11-28 16:29:08.594526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.980 [2024-11-28 16:29:08.594562] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:16.980 [2024-11-28 16:29:08.594576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.980 [2024-11-28 16:29:08.594636] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:16.980 [2024-11-28 16:29:08.594643] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:16.980 [2024-11-28 16:29:08.594711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:16.980 [2024-11-28 16:29:08.594760] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:16.980 [2024-11-28 16:29:08.594771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:16.980 [2024-11-28 16:29:08.594814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.980 pt2 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:16.980 16:29:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.980 "name": 
"raid_bdev1", 00:16:16.980 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:16.980 "strip_size_kb": 0, 00:16:16.980 "state": "online", 00:16:16.980 "raid_level": "raid1", 00:16:16.980 "superblock": true, 00:16:16.980 "num_base_bdevs": 2, 00:16:16.980 "num_base_bdevs_discovered": 2, 00:16:16.980 "num_base_bdevs_operational": 2, 00:16:16.980 "base_bdevs_list": [ 00:16:16.980 { 00:16:16.980 "name": "pt1", 00:16:16.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.980 "is_configured": true, 00:16:16.980 "data_offset": 256, 00:16:16.980 "data_size": 7936 00:16:16.980 }, 00:16:16.980 { 00:16:16.980 "name": "pt2", 00:16:16.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.980 "is_configured": true, 00:16:16.980 "data_offset": 256, 00:16:16.980 "data_size": 7936 00:16:16.980 } 00:16:16.980 ] 00:16:16.980 }' 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.980 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.240 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.240 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:17.240 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.240 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.240 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.240 16:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.240 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.240 16:29:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.240 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.240 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.240 [2024-11-28 16:29:09.009891] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.500 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.500 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.500 "name": "raid_bdev1", 00:16:17.500 "aliases": [ 00:16:17.500 "a53cdce4-52c6-43ed-a04e-eb910e491474" 00:16:17.500 ], 00:16:17.500 "product_name": "Raid Volume", 00:16:17.500 "block_size": 4128, 00:16:17.500 "num_blocks": 7936, 00:16:17.500 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:17.500 "md_size": 32, 00:16:17.500 "md_interleave": true, 00:16:17.500 "dif_type": 0, 00:16:17.500 "assigned_rate_limits": { 00:16:17.500 "rw_ios_per_sec": 0, 00:16:17.500 "rw_mbytes_per_sec": 0, 00:16:17.500 "r_mbytes_per_sec": 0, 00:16:17.500 "w_mbytes_per_sec": 0 00:16:17.500 }, 00:16:17.500 "claimed": false, 00:16:17.500 "zoned": false, 00:16:17.500 "supported_io_types": { 00:16:17.500 "read": true, 00:16:17.500 "write": true, 00:16:17.500 "unmap": false, 00:16:17.500 "flush": false, 00:16:17.500 "reset": true, 00:16:17.500 "nvme_admin": false, 00:16:17.500 "nvme_io": false, 00:16:17.500 "nvme_io_md": false, 00:16:17.500 "write_zeroes": true, 00:16:17.500 "zcopy": false, 00:16:17.500 "get_zone_info": false, 00:16:17.500 "zone_management": false, 00:16:17.500 "zone_append": false, 00:16:17.500 "compare": false, 00:16:17.500 "compare_and_write": false, 00:16:17.500 "abort": false, 00:16:17.500 "seek_hole": false, 00:16:17.500 "seek_data": false, 00:16:17.500 "copy": false, 00:16:17.500 "nvme_iov_md": 
false 00:16:17.500 }, 00:16:17.500 "memory_domains": [ 00:16:17.500 { 00:16:17.500 "dma_device_id": "system", 00:16:17.501 "dma_device_type": 1 00:16:17.501 }, 00:16:17.501 { 00:16:17.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.501 "dma_device_type": 2 00:16:17.501 }, 00:16:17.501 { 00:16:17.501 "dma_device_id": "system", 00:16:17.501 "dma_device_type": 1 00:16:17.501 }, 00:16:17.501 { 00:16:17.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.501 "dma_device_type": 2 00:16:17.501 } 00:16:17.501 ], 00:16:17.501 "driver_specific": { 00:16:17.501 "raid": { 00:16:17.501 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:17.501 "strip_size_kb": 0, 00:16:17.501 "state": "online", 00:16:17.501 "raid_level": "raid1", 00:16:17.501 "superblock": true, 00:16:17.501 "num_base_bdevs": 2, 00:16:17.501 "num_base_bdevs_discovered": 2, 00:16:17.501 "num_base_bdevs_operational": 2, 00:16:17.501 "base_bdevs_list": [ 00:16:17.501 { 00:16:17.501 "name": "pt1", 00:16:17.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.501 "is_configured": true, 00:16:17.501 "data_offset": 256, 00:16:17.501 "data_size": 7936 00:16:17.501 }, 00:16:17.501 { 00:16:17.501 "name": "pt2", 00:16:17.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.501 "is_configured": true, 00:16:17.501 "data_offset": 256, 00:16:17.501 "data_size": 7936 00:16:17.501 } 00:16:17.501 ] 00:16:17.501 } 00:16:17.501 } 00:16:17.501 }' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:17.501 pt2' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:17.501 [2024-11-28 16:29:09.225492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a53cdce4-52c6-43ed-a04e-eb910e491474 '!=' a53cdce4-52c6-43ed-a04e-eb910e491474 ']' 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.501 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.501 [2024-11-28 16:29:09.269212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:17.761 "name": "raid_bdev1", 00:16:17.761 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:17.761 "strip_size_kb": 0, 00:16:17.761 "state": "online", 00:16:17.761 "raid_level": "raid1", 00:16:17.761 "superblock": true, 00:16:17.761 "num_base_bdevs": 2, 00:16:17.761 "num_base_bdevs_discovered": 1, 00:16:17.761 "num_base_bdevs_operational": 1, 00:16:17.761 "base_bdevs_list": [ 00:16:17.761 { 00:16:17.761 "name": null, 00:16:17.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.761 "is_configured": false, 00:16:17.761 "data_offset": 0, 00:16:17.761 "data_size": 7936 00:16:17.761 }, 00:16:17.761 { 00:16:17.761 "name": "pt2", 00:16:17.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.761 "is_configured": true, 00:16:17.761 "data_offset": 256, 00:16:17.761 "data_size": 7936 00:16:17.761 } 00:16:17.761 ] 00:16:17.761 }' 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.761 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.020 [2024-11-28 16:29:09.744356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.020 [2024-11-28 16:29:09.744420] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.020 [2024-11-28 16:29:09.744497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.020 [2024-11-28 16:29:09.744537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:18.020 [2024-11-28 16:29:09.744545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.020 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.280 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.280 [2024-11-28 16:29:09.820245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.280 [2024-11-28 16:29:09.820332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.281 [2024-11-28 16:29:09.820380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:18.281 [2024-11-28 16:29:09.820409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.281 [2024-11-28 16:29:09.822282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.281 [2024-11-28 16:29:09.822347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.281 [2024-11-28 16:29:09.822434] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.281 [2024-11-28 16:29:09.822478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.281 [2024-11-28 16:29:09.822568] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:18.281 [2024-11-28 16:29:09.822609] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:16:18.281 [2024-11-28 16:29:09.822713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:18.281 [2024-11-28 16:29:09.822800] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:18.281 [2024-11-28 16:29:09.822853] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:18.281 [2024-11-28 16:29:09.822946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.281 pt2 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.281 16:29:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.281 "name": "raid_bdev1", 00:16:18.281 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:18.281 "strip_size_kb": 0, 00:16:18.281 "state": "online", 00:16:18.281 "raid_level": "raid1", 00:16:18.281 "superblock": true, 00:16:18.281 "num_base_bdevs": 2, 00:16:18.281 "num_base_bdevs_discovered": 1, 00:16:18.281 "num_base_bdevs_operational": 1, 00:16:18.281 "base_bdevs_list": [ 00:16:18.281 { 00:16:18.281 "name": null, 00:16:18.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.281 "is_configured": false, 00:16:18.281 "data_offset": 256, 00:16:18.281 "data_size": 7936 00:16:18.281 }, 00:16:18.281 { 00:16:18.281 "name": "pt2", 00:16:18.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.281 "is_configured": true, 00:16:18.281 "data_offset": 256, 00:16:18.281 "data_size": 7936 00:16:18.281 } 00:16:18.281 ] 00:16:18.281 }' 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.281 16:29:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.541 16:29:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 [2024-11-28 16:29:10.227624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.541 [2024-11-28 16:29:10.227681] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.541 [2024-11-28 16:29:10.227744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.541 [2024-11-28 16:29:10.227792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.541 [2024-11-28 16:29:10.227841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 [2024-11-28 16:29:10.283523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.541 [2024-11-28 16:29:10.283573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.541 [2024-11-28 16:29:10.283605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:18.541 [2024-11-28 16:29:10.283620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.541 [2024-11-28 16:29:10.285441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.541 [2024-11-28 16:29:10.285480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.541 [2024-11-28 16:29:10.285520] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:18.541 [2024-11-28 16:29:10.285554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.541 [2024-11-28 16:29:10.285633] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:18.541 [2024-11-28 16:29:10.285643] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.541 [2024-11-28 16:29:10.285657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:18.541 [2024-11-28 16:29:10.285687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.541 [2024-11-28 16:29:10.285742] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:16:18.541 [2024-11-28 16:29:10.285751] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:18.541 [2024-11-28 16:29:10.285806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:18.541 [2024-11-28 16:29:10.285875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:18.541 [2024-11-28 16:29:10.285884] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:18.541 [2024-11-28 16:29:10.285946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.541 pt1 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.541 16:29:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.801 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.801 "name": "raid_bdev1", 00:16:18.801 "uuid": "a53cdce4-52c6-43ed-a04e-eb910e491474", 00:16:18.801 "strip_size_kb": 0, 00:16:18.801 "state": "online", 00:16:18.801 "raid_level": "raid1", 00:16:18.801 "superblock": true, 00:16:18.801 "num_base_bdevs": 2, 00:16:18.801 "num_base_bdevs_discovered": 1, 00:16:18.801 "num_base_bdevs_operational": 1, 00:16:18.801 "base_bdevs_list": [ 00:16:18.801 { 00:16:18.801 "name": null, 00:16:18.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.801 "is_configured": false, 00:16:18.801 "data_offset": 256, 00:16:18.801 "data_size": 7936 00:16:18.801 }, 00:16:18.801 { 00:16:18.801 "name": "pt2", 00:16:18.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.801 "is_configured": true, 00:16:18.801 "data_offset": 256, 00:16:18.801 "data_size": 7936 00:16:18.801 } 00:16:18.801 ] 00:16:18.801 }' 00:16:18.801 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.801 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:19.062 [2024-11-28 16:29:10.762915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a53cdce4-52c6-43ed-a04e-eb910e491474 '!=' a53cdce4-52c6-43ed-a04e-eb910e491474 ']' 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99007 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99007 ']' 00:16:19.062 16:29:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99007 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.062 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99007 00:16:19.322 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.322 killing process with pid 99007 00:16:19.322 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.322 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99007' 00:16:19.322 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99007 00:16:19.322 16:29:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99007 00:16:19.322 [2024-11-28 16:29:10.844137] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.322 [2024-11-28 16:29:10.844208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.322 [2024-11-28 16:29:10.844259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.322 [2024-11-28 16:29:10.844268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:19.322 [2024-11-28 16:29:10.867801] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.583 16:29:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:19.583 00:16:19.583 real 0m4.840s 00:16:19.583 user 0m7.848s 00:16:19.583 sys 0m1.061s 00:16:19.583 
16:29:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.583 ************************************ 00:16:19.583 END TEST raid_superblock_test_md_interleaved 00:16:19.583 ************************************ 00:16:19.583 16:29:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.583 16:29:11 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:19.583 16:29:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:19.583 16:29:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.583 16:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.583 ************************************ 00:16:19.583 START TEST raid_rebuild_test_sb_md_interleaved 00:16:19.583 ************************************ 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99319 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99319 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99319 ']' 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.583 16:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.583 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:19.583 Zero copy mechanism will not be used. 00:16:19.583 [2024-11-28 16:29:11.304418] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:19.583 [2024-11-28 16:29:11.304561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99319 ] 00:16:19.843 [2024-11-28 16:29:11.471560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.843 [2024-11-28 16:29:11.518691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.843 [2024-11-28 16:29:11.561425] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.843 [2024-11-28 16:29:11.561458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.413 BaseBdev1_malloc 00:16:20.413 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.414 16:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.414 [2024-11-28 16:29:12.127636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:20.414 [2024-11-28 16:29:12.127698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.414 [2024-11-28 16:29:12.127723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.414 [2024-11-28 16:29:12.127731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.414 [2024-11-28 16:29:12.129573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.414 [2024-11-28 16:29:12.129625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:20.414 BaseBdev1 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.414 BaseBdev2_malloc 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.414 [2024-11-28 16:29:12.172649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:20.414 [2024-11-28 16:29:12.172784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.414 [2024-11-28 16:29:12.172879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:20.414 [2024-11-28 16:29:12.172910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.414 [2024-11-28 16:29:12.176900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.414 [2024-11-28 16:29:12.176966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:20.414 BaseBdev2 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.414 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 spare_malloc 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 spare_delay 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 [2024-11-28 16:29:12.211190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:20.674 [2024-11-28 16:29:12.211244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.674 [2024-11-28 16:29:12.211265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:20.674 [2024-11-28 16:29:12.211273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.674 [2024-11-28 16:29:12.213128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.674 [2024-11-28 16:29:12.213206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:20.674 spare 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 [2024-11-28 16:29:12.219205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.674 [2024-11-28 16:29:12.221031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.674 [2024-11-28 
16:29:12.221190] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:20.674 [2024-11-28 16:29:12.221204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:20.674 [2024-11-28 16:29:12.221290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:20.674 [2024-11-28 16:29:12.221357] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:20.674 [2024-11-28 16:29:12.221367] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:20.674 [2024-11-28 16:29:12.221434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.674 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.675 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.675 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.675 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.675 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.675 "name": "raid_bdev1", 00:16:20.675 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:20.675 "strip_size_kb": 0, 00:16:20.675 "state": "online", 00:16:20.675 "raid_level": "raid1", 00:16:20.675 "superblock": true, 00:16:20.675 "num_base_bdevs": 2, 00:16:20.675 "num_base_bdevs_discovered": 2, 00:16:20.675 "num_base_bdevs_operational": 2, 00:16:20.675 "base_bdevs_list": [ 00:16:20.675 { 00:16:20.675 "name": "BaseBdev1", 00:16:20.675 "uuid": "f6e645da-9539-50c6-bc70-5caaf9017ee9", 00:16:20.675 "is_configured": true, 00:16:20.675 "data_offset": 256, 00:16:20.675 "data_size": 7936 00:16:20.675 }, 00:16:20.675 { 00:16:20.675 "name": "BaseBdev2", 00:16:20.675 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:20.675 "is_configured": true, 00:16:20.675 "data_offset": 256, 00:16:20.675 "data_size": 7936 00:16:20.675 } 00:16:20.675 ] 00:16:20.675 }' 00:16:20.675 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.675 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 16:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 [2024-11-28 16:29:12.634761] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:21.195 16:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.195 [2024-11-28 16:29:12.730298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.195 16:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.195 "name": "raid_bdev1", 00:16:21.195 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:21.195 "strip_size_kb": 0, 00:16:21.195 "state": "online", 00:16:21.195 "raid_level": "raid1", 00:16:21.195 "superblock": true, 00:16:21.195 "num_base_bdevs": 2, 00:16:21.195 "num_base_bdevs_discovered": 1, 00:16:21.195 "num_base_bdevs_operational": 1, 00:16:21.195 "base_bdevs_list": [ 00:16:21.195 { 00:16:21.195 "name": null, 00:16:21.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.195 "is_configured": false, 00:16:21.195 "data_offset": 0, 00:16:21.195 "data_size": 7936 00:16:21.195 }, 00:16:21.195 { 00:16:21.195 "name": "BaseBdev2", 00:16:21.195 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:21.195 "is_configured": true, 00:16:21.195 "data_offset": 256, 00:16:21.195 "data_size": 7936 00:16:21.195 } 00:16:21.195 ] 00:16:21.195 }' 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.195 16:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.455 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.455 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.455 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.455 [2024-11-28 16:29:13.177546] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.455 [2024-11-28 16:29:13.180530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:21.455 [2024-11-28 16:29:13.182422] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.455 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.455 16:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.837 "name": "raid_bdev1", 00:16:22.837 
"uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:22.837 "strip_size_kb": 0, 00:16:22.837 "state": "online", 00:16:22.837 "raid_level": "raid1", 00:16:22.837 "superblock": true, 00:16:22.837 "num_base_bdevs": 2, 00:16:22.837 "num_base_bdevs_discovered": 2, 00:16:22.837 "num_base_bdevs_operational": 2, 00:16:22.837 "process": { 00:16:22.837 "type": "rebuild", 00:16:22.837 "target": "spare", 00:16:22.837 "progress": { 00:16:22.837 "blocks": 2560, 00:16:22.837 "percent": 32 00:16:22.837 } 00:16:22.837 }, 00:16:22.837 "base_bdevs_list": [ 00:16:22.837 { 00:16:22.837 "name": "spare", 00:16:22.837 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:22.837 "is_configured": true, 00:16:22.837 "data_offset": 256, 00:16:22.837 "data_size": 7936 00:16:22.837 }, 00:16:22.837 { 00:16:22.837 "name": "BaseBdev2", 00:16:22.837 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:22.837 "is_configured": true, 00:16:22.837 "data_offset": 256, 00:16:22.837 "data_size": 7936 00:16:22.837 } 00:16:22.837 ] 00:16:22.837 }' 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.837 [2024-11-28 16:29:14.345304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:22.837 [2024-11-28 16:29:14.387279] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.837 [2024-11-28 16:29:14.387340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.837 [2024-11-28 16:29:14.387358] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.837 [2024-11-28 16:29:14.387371] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.837 "name": "raid_bdev1", 00:16:22.837 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:22.837 "strip_size_kb": 0, 00:16:22.837 "state": "online", 00:16:22.837 "raid_level": "raid1", 00:16:22.837 "superblock": true, 00:16:22.837 "num_base_bdevs": 2, 00:16:22.837 "num_base_bdevs_discovered": 1, 00:16:22.837 "num_base_bdevs_operational": 1, 00:16:22.837 "base_bdevs_list": [ 00:16:22.837 { 00:16:22.837 "name": null, 00:16:22.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.837 "is_configured": false, 00:16:22.837 "data_offset": 0, 00:16:22.837 "data_size": 7936 00:16:22.837 }, 00:16:22.837 { 00:16:22.837 "name": "BaseBdev2", 00:16:22.837 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:22.837 "is_configured": true, 00:16:22.837 "data_offset": 256, 00:16:22.837 "data_size": 7936 00:16:22.837 } 00:16:22.837 ] 00:16:22.837 }' 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.837 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.097 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.356 "name": "raid_bdev1", 00:16:23.356 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:23.356 "strip_size_kb": 0, 00:16:23.356 "state": "online", 00:16:23.356 "raid_level": "raid1", 00:16:23.356 "superblock": true, 00:16:23.356 "num_base_bdevs": 2, 00:16:23.356 "num_base_bdevs_discovered": 1, 00:16:23.356 "num_base_bdevs_operational": 1, 00:16:23.356 "base_bdevs_list": [ 00:16:23.356 { 00:16:23.356 "name": null, 00:16:23.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.356 "is_configured": false, 00:16:23.356 "data_offset": 0, 00:16:23.356 "data_size": 7936 00:16:23.356 }, 00:16:23.356 { 00:16:23.356 "name": "BaseBdev2", 00:16:23.356 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:23.356 "is_configured": true, 00:16:23.356 "data_offset": 256, 00:16:23.356 "data_size": 7936 00:16:23.356 } 00:16:23.356 ] 00:16:23.356 }' 
00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.356 [2024-11-28 16:29:14.969929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.356 [2024-11-28 16:29:14.972899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:23.356 [2024-11-28 16:29:14.974762] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.356 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.357 16:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.296 16:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.296 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.296 "name": "raid_bdev1", 00:16:24.296 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:24.296 "strip_size_kb": 0, 00:16:24.296 "state": "online", 00:16:24.296 "raid_level": "raid1", 00:16:24.296 "superblock": true, 00:16:24.296 "num_base_bdevs": 2, 00:16:24.296 "num_base_bdevs_discovered": 2, 00:16:24.296 "num_base_bdevs_operational": 2, 00:16:24.296 "process": { 00:16:24.296 "type": "rebuild", 00:16:24.296 "target": "spare", 00:16:24.296 "progress": { 00:16:24.296 "blocks": 2560, 00:16:24.296 "percent": 32 00:16:24.296 } 00:16:24.296 }, 00:16:24.296 "base_bdevs_list": [ 00:16:24.296 { 00:16:24.296 "name": "spare", 00:16:24.296 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:24.296 "is_configured": true, 00:16:24.296 "data_offset": 256, 00:16:24.296 "data_size": 7936 00:16:24.296 }, 00:16:24.296 { 00:16:24.296 "name": "BaseBdev2", 00:16:24.296 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:24.296 "is_configured": true, 00:16:24.296 "data_offset": 256, 00:16:24.296 "data_size": 7936 00:16:24.296 } 00:16:24.296 ] 00:16:24.296 }' 00:16:24.296 16:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:24.556 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.556 16:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.556 "name": "raid_bdev1", 00:16:24.556 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:24.556 "strip_size_kb": 0, 00:16:24.556 "state": "online", 00:16:24.556 "raid_level": "raid1", 00:16:24.556 "superblock": true, 00:16:24.556 "num_base_bdevs": 2, 00:16:24.556 "num_base_bdevs_discovered": 2, 00:16:24.556 "num_base_bdevs_operational": 2, 00:16:24.556 "process": { 00:16:24.556 "type": "rebuild", 00:16:24.556 "target": "spare", 00:16:24.556 "progress": { 00:16:24.556 "blocks": 2816, 00:16:24.556 "percent": 35 00:16:24.556 } 00:16:24.556 }, 00:16:24.556 "base_bdevs_list": [ 00:16:24.556 { 00:16:24.556 "name": "spare", 00:16:24.556 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:24.556 "is_configured": true, 00:16:24.556 "data_offset": 256, 00:16:24.556 "data_size": 7936 00:16:24.556 }, 00:16:24.556 { 00:16:24.556 "name": "BaseBdev2", 00:16:24.556 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:24.556 "is_configured": true, 00:16:24.556 "data_offset": 256, 00:16:24.556 "data_size": 7936 00:16:24.556 } 00:16:24.556 ] 00:16:24.556 }' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.556 16:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.497 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.758 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.758 16:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.758 "name": "raid_bdev1", 00:16:25.758 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:25.758 "strip_size_kb": 0, 00:16:25.758 "state": "online", 00:16:25.758 "raid_level": "raid1", 00:16:25.758 "superblock": true, 00:16:25.758 "num_base_bdevs": 2, 00:16:25.758 "num_base_bdevs_discovered": 2, 00:16:25.758 "num_base_bdevs_operational": 2, 00:16:25.758 "process": { 00:16:25.758 "type": "rebuild", 00:16:25.758 "target": "spare", 00:16:25.758 "progress": { 00:16:25.758 "blocks": 5632, 00:16:25.758 "percent": 70 00:16:25.758 } 00:16:25.758 }, 00:16:25.758 "base_bdevs_list": [ 00:16:25.758 { 00:16:25.758 "name": "spare", 00:16:25.758 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:25.758 "is_configured": true, 00:16:25.758 "data_offset": 256, 00:16:25.758 "data_size": 7936 00:16:25.758 }, 00:16:25.758 { 00:16:25.758 "name": "BaseBdev2", 00:16:25.758 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:25.758 "is_configured": true, 00:16:25.758 "data_offset": 256, 00:16:25.758 "data_size": 7936 00:16:25.758 } 00:16:25.758 ] 00:16:25.758 }' 00:16:25.758 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.758 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.758 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.758 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.758 16:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.326 [2024-11-28 16:29:18.085502] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:26.326 [2024-11-28 16:29:18.085643] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:26.326 [2024-11-28 16:29:18.085772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.897 "name": "raid_bdev1", 00:16:26.897 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:26.897 "strip_size_kb": 0, 00:16:26.897 "state": "online", 00:16:26.897 "raid_level": "raid1", 00:16:26.897 "superblock": true, 00:16:26.897 "num_base_bdevs": 2, 00:16:26.897 
"num_base_bdevs_discovered": 2, 00:16:26.897 "num_base_bdevs_operational": 2, 00:16:26.897 "base_bdevs_list": [ 00:16:26.897 { 00:16:26.897 "name": "spare", 00:16:26.897 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 256, 00:16:26.897 "data_size": 7936 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "name": "BaseBdev2", 00:16:26.897 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 256, 00:16:26.897 "data_size": 7936 00:16:26.897 } 00:16:26.897 ] 00:16:26.897 }' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.897 16:29:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.897 "name": "raid_bdev1", 00:16:26.897 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:26.897 "strip_size_kb": 0, 00:16:26.897 "state": "online", 00:16:26.897 "raid_level": "raid1", 00:16:26.897 "superblock": true, 00:16:26.897 "num_base_bdevs": 2, 00:16:26.897 "num_base_bdevs_discovered": 2, 00:16:26.897 "num_base_bdevs_operational": 2, 00:16:26.897 "base_bdevs_list": [ 00:16:26.897 { 00:16:26.897 "name": "spare", 00:16:26.897 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 256, 00:16:26.897 "data_size": 7936 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "name": "BaseBdev2", 00:16:26.897 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 256, 00:16:26.897 "data_size": 7936 00:16:26.897 } 00:16:26.897 ] 00:16:26.897 }' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.897 16:29:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.897 "name": 
"raid_bdev1", 00:16:26.897 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:26.897 "strip_size_kb": 0, 00:16:26.897 "state": "online", 00:16:26.897 "raid_level": "raid1", 00:16:26.897 "superblock": true, 00:16:26.897 "num_base_bdevs": 2, 00:16:26.897 "num_base_bdevs_discovered": 2, 00:16:26.897 "num_base_bdevs_operational": 2, 00:16:26.897 "base_bdevs_list": [ 00:16:26.897 { 00:16:26.897 "name": "spare", 00:16:26.897 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 256, 00:16:26.897 "data_size": 7936 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "name": "BaseBdev2", 00:16:26.897 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 256, 00:16:26.897 "data_size": 7936 00:16:26.897 } 00:16:26.897 ] 00:16:26.897 }' 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.897 16:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 [2024-11-28 16:29:19.035969] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.467 [2024-11-28 16:29:19.036010] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.467 [2024-11-28 16:29:19.036084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.467 [2024-11-28 16:29:19.036166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.467 [2024-11-28 
16:29:19.036179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.467 16:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 [2024-11-28 16:29:19.099921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.467 [2024-11-28 16:29:19.099982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.467 [2024-11-28 16:29:19.100001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:27.467 [2024-11-28 16:29:19.100027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.467 [2024-11-28 16:29:19.101910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.467 [2024-11-28 16:29:19.101990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.467 [2024-11-28 16:29:19.102048] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:27.467 [2024-11-28 16:29:19.102097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.467 [2024-11-28 16:29:19.102200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.467 spare 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.467 [2024-11-28 16:29:19.202088] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:27.467 [2024-11-28 16:29:19.202155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:27.467 [2024-11-28 16:29:19.202282] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:27.467 [2024-11-28 16:29:19.202385] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:27.467 [2024-11-28 16:29:19.202397] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:27.467 [2024-11-28 16:29:19.202471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.467 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.468 16:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.468 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.726 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.726 "name": "raid_bdev1", 00:16:27.726 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:27.726 "strip_size_kb": 0, 00:16:27.726 "state": "online", 00:16:27.726 "raid_level": "raid1", 00:16:27.726 "superblock": true, 00:16:27.726 "num_base_bdevs": 2, 00:16:27.726 "num_base_bdevs_discovered": 2, 00:16:27.726 "num_base_bdevs_operational": 2, 00:16:27.726 "base_bdevs_list": [ 00:16:27.726 { 00:16:27.726 "name": "spare", 00:16:27.726 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:27.726 "is_configured": true, 00:16:27.726 "data_offset": 256, 00:16:27.726 "data_size": 7936 00:16:27.726 }, 00:16:27.726 { 00:16:27.726 "name": "BaseBdev2", 00:16:27.726 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:27.726 "is_configured": true, 00:16:27.726 "data_offset": 256, 00:16:27.726 "data_size": 7936 00:16:27.726 } 00:16:27.726 ] 00:16:27.726 }' 00:16:27.726 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.726 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.985 16:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.985 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.985 "name": "raid_bdev1", 00:16:27.985 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:27.985 "strip_size_kb": 0, 00:16:27.985 "state": "online", 00:16:27.985 "raid_level": "raid1", 00:16:27.985 "superblock": true, 00:16:27.985 "num_base_bdevs": 2, 00:16:27.985 "num_base_bdevs_discovered": 2, 00:16:27.985 "num_base_bdevs_operational": 2, 00:16:27.985 "base_bdevs_list": [ 00:16:27.985 { 00:16:27.985 "name": "spare", 00:16:27.985 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:27.985 "is_configured": true, 00:16:27.985 "data_offset": 256, 00:16:27.985 "data_size": 7936 00:16:27.985 }, 00:16:27.985 { 00:16:27.985 "name": "BaseBdev2", 00:16:27.985 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:27.985 "is_configured": true, 00:16:27.985 "data_offset": 256, 00:16:27.985 "data_size": 7936 00:16:27.985 } 00:16:27.985 ] 00:16:27.985 }' 00:16:27.985 16:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.244 [2024-11-28 16:29:19.858625] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.244 16:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.244 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.245 "name": "raid_bdev1", 00:16:28.245 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:28.245 "strip_size_kb": 0, 00:16:28.245 "state": "online", 00:16:28.245 
"raid_level": "raid1", 00:16:28.245 "superblock": true, 00:16:28.245 "num_base_bdevs": 2, 00:16:28.245 "num_base_bdevs_discovered": 1, 00:16:28.245 "num_base_bdevs_operational": 1, 00:16:28.245 "base_bdevs_list": [ 00:16:28.245 { 00:16:28.245 "name": null, 00:16:28.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.245 "is_configured": false, 00:16:28.245 "data_offset": 0, 00:16:28.245 "data_size": 7936 00:16:28.245 }, 00:16:28.245 { 00:16:28.245 "name": "BaseBdev2", 00:16:28.245 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:28.245 "is_configured": true, 00:16:28.245 "data_offset": 256, 00:16:28.245 "data_size": 7936 00:16:28.245 } 00:16:28.245 ] 00:16:28.245 }' 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.245 16:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.814 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.814 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.814 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.814 [2024-11-28 16:29:20.293959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.814 [2024-11-28 16:29:20.294149] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:28.814 [2024-11-28 16:29:20.294225] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:28.814 [2024-11-28 16:29:20.294288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.814 [2024-11-28 16:29:20.297118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:28.814 [2024-11-28 16:29:20.299034] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.814 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.814 16:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:29.753 "name": "raid_bdev1", 00:16:29.753 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:29.753 "strip_size_kb": 0, 00:16:29.753 "state": "online", 00:16:29.753 "raid_level": "raid1", 00:16:29.753 "superblock": true, 00:16:29.753 "num_base_bdevs": 2, 00:16:29.753 "num_base_bdevs_discovered": 2, 00:16:29.753 "num_base_bdevs_operational": 2, 00:16:29.753 "process": { 00:16:29.753 "type": "rebuild", 00:16:29.753 "target": "spare", 00:16:29.753 "progress": { 00:16:29.753 "blocks": 2560, 00:16:29.753 "percent": 32 00:16:29.753 } 00:16:29.753 }, 00:16:29.753 "base_bdevs_list": [ 00:16:29.753 { 00:16:29.753 "name": "spare", 00:16:29.753 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:29.753 "is_configured": true, 00:16:29.753 "data_offset": 256, 00:16:29.753 "data_size": 7936 00:16:29.753 }, 00:16:29.753 { 00:16:29.753 "name": "BaseBdev2", 00:16:29.753 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:29.753 "is_configured": true, 00:16:29.753 "data_offset": 256, 00:16:29.753 "data_size": 7936 00:16:29.753 } 00:16:29.753 ] 00:16:29.753 }' 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.753 [2024-11-28 16:29:21.449953] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.753 [2024-11-28 16:29:21.503126] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.753 [2024-11-28 16:29:21.503245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.753 [2024-11-28 16:29:21.503281] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.753 [2024-11-28 16:29:21.503301] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.753 16:29:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.753 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.014 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.014 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.014 "name": "raid_bdev1", 00:16:30.014 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:30.014 "strip_size_kb": 0, 00:16:30.014 "state": "online", 00:16:30.014 "raid_level": "raid1", 00:16:30.014 "superblock": true, 00:16:30.014 "num_base_bdevs": 2, 00:16:30.014 "num_base_bdevs_discovered": 1, 00:16:30.014 "num_base_bdevs_operational": 1, 00:16:30.014 "base_bdevs_list": [ 00:16:30.014 { 00:16:30.014 "name": null, 00:16:30.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.014 "is_configured": false, 00:16:30.014 "data_offset": 0, 00:16:30.014 "data_size": 7936 00:16:30.014 }, 00:16:30.014 { 00:16:30.014 "name": "BaseBdev2", 00:16:30.014 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:30.014 "is_configured": true, 00:16:30.014 "data_offset": 256, 00:16:30.014 "data_size": 7936 00:16:30.014 } 00:16:30.014 ] 00:16:30.014 }' 00:16:30.014 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.014 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:30.274 16:29:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.274 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 [2024-11-28 16:29:21.978051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:30.274 [2024-11-28 16:29:21.978160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.274 [2024-11-28 16:29:21.978205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:30.274 [2024-11-28 16:29:21.978233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.274 [2024-11-28 16:29:21.978435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.274 [2024-11-28 16:29:21.978485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:30.274 [2024-11-28 16:29:21.978560] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:30.274 [2024-11-28 16:29:21.978596] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:30.274 [2024-11-28 16:29:21.978640] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:30.274 [2024-11-28 16:29:21.978710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.274 [2024-11-28 16:29:21.981091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:30.274 [2024-11-28 16:29:21.982968] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.274 spare 00:16:30.274 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.274 16:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.657 16:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:31.657 "name": "raid_bdev1", 00:16:31.657 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:31.657 "strip_size_kb": 0, 00:16:31.657 "state": "online", 00:16:31.657 "raid_level": "raid1", 00:16:31.657 "superblock": true, 00:16:31.657 "num_base_bdevs": 2, 00:16:31.657 "num_base_bdevs_discovered": 2, 00:16:31.657 "num_base_bdevs_operational": 2, 00:16:31.657 "process": { 00:16:31.657 "type": "rebuild", 00:16:31.657 "target": "spare", 00:16:31.657 "progress": { 00:16:31.657 "blocks": 2560, 00:16:31.657 "percent": 32 00:16:31.657 } 00:16:31.657 }, 00:16:31.657 "base_bdevs_list": [ 00:16:31.657 { 00:16:31.657 "name": "spare", 00:16:31.657 "uuid": "2b97423b-6846-595a-a0a7-5e09b5e7c840", 00:16:31.657 "is_configured": true, 00:16:31.657 "data_offset": 256, 00:16:31.657 "data_size": 7936 00:16:31.657 }, 00:16:31.657 { 00:16:31.657 "name": "BaseBdev2", 00:16:31.657 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:31.657 "is_configured": true, 00:16:31.657 "data_offset": 256, 00:16:31.657 "data_size": 7936 00:16:31.657 } 00:16:31.657 ] 00:16:31.657 }' 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.657 [2024-11-28 
16:29:23.137750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.657 [2024-11-28 16:29:23.186930] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.657 [2024-11-28 16:29:23.186991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.657 [2024-11-28 16:29:23.187004] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.657 [2024-11-28 16:29:23.187013] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.657 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.658 16:29:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.658 "name": "raid_bdev1", 00:16:31.658 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:31.658 "strip_size_kb": 0, 00:16:31.658 "state": "online", 00:16:31.658 "raid_level": "raid1", 00:16:31.658 "superblock": true, 00:16:31.658 "num_base_bdevs": 2, 00:16:31.658 "num_base_bdevs_discovered": 1, 00:16:31.658 "num_base_bdevs_operational": 1, 00:16:31.658 "base_bdevs_list": [ 00:16:31.658 { 00:16:31.658 "name": null, 00:16:31.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.658 "is_configured": false, 00:16:31.658 "data_offset": 0, 00:16:31.658 "data_size": 7936 00:16:31.658 }, 00:16:31.658 { 00:16:31.658 "name": "BaseBdev2", 00:16:31.658 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:31.658 "is_configured": true, 00:16:31.658 "data_offset": 256, 00:16:31.658 "data_size": 7936 00:16:31.658 } 00:16:31.658 ] 00:16:31.658 }' 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.658 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.918 16:29:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.918 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.178 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.178 "name": "raid_bdev1", 00:16:32.178 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:32.178 "strip_size_kb": 0, 00:16:32.178 "state": "online", 00:16:32.178 "raid_level": "raid1", 00:16:32.178 "superblock": true, 00:16:32.178 "num_base_bdevs": 2, 00:16:32.179 "num_base_bdevs_discovered": 1, 00:16:32.179 "num_base_bdevs_operational": 1, 00:16:32.179 "base_bdevs_list": [ 00:16:32.179 { 00:16:32.179 "name": null, 00:16:32.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.179 "is_configured": false, 00:16:32.179 "data_offset": 0, 00:16:32.179 "data_size": 7936 00:16:32.179 }, 00:16:32.179 { 00:16:32.179 "name": "BaseBdev2", 00:16:32.179 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:32.179 "is_configured": true, 00:16:32.179 "data_offset": 256, 
00:16:32.179 "data_size": 7936 00:16:32.179 } 00:16:32.179 ] 00:16:32.179 }' 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.179 [2024-11-28 16:29:23.801457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:32.179 [2024-11-28 16:29:23.801571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.179 [2024-11-28 16:29:23.801622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:32.179 [2024-11-28 16:29:23.801654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.179 [2024-11-28 16:29:23.801814] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.179 [2024-11-28 16:29:23.801885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:32.179 [2024-11-28 16:29:23.801963] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:32.179 [2024-11-28 16:29:23.802018] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:32.179 [2024-11-28 16:29:23.802047] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:32.179 [2024-11-28 16:29:23.802062] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:32.179 BaseBdev1 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.179 16:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.120 16:29:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.120 "name": "raid_bdev1", 00:16:33.120 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:33.120 "strip_size_kb": 0, 00:16:33.120 "state": "online", 00:16:33.120 "raid_level": "raid1", 00:16:33.120 "superblock": true, 00:16:33.120 "num_base_bdevs": 2, 00:16:33.120 "num_base_bdevs_discovered": 1, 00:16:33.120 "num_base_bdevs_operational": 1, 00:16:33.120 "base_bdevs_list": [ 00:16:33.120 { 00:16:33.120 "name": null, 00:16:33.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.120 "is_configured": false, 00:16:33.120 "data_offset": 0, 00:16:33.120 "data_size": 7936 00:16:33.120 }, 00:16:33.120 { 00:16:33.120 "name": "BaseBdev2", 00:16:33.120 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:33.120 "is_configured": true, 00:16:33.120 "data_offset": 256, 00:16:33.120 "data_size": 7936 00:16:33.120 } 00:16:33.120 ] 00:16:33.120 }' 00:16:33.120 16:29:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.120 16:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.690 "name": "raid_bdev1", 00:16:33.690 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:33.690 "strip_size_kb": 0, 00:16:33.690 "state": "online", 00:16:33.690 "raid_level": "raid1", 00:16:33.690 "superblock": true, 00:16:33.690 "num_base_bdevs": 2, 00:16:33.690 "num_base_bdevs_discovered": 1, 00:16:33.690 "num_base_bdevs_operational": 1, 00:16:33.690 "base_bdevs_list": [ 00:16:33.690 { 00:16:33.690 "name": 
null, 00:16:33.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.690 "is_configured": false, 00:16:33.690 "data_offset": 0, 00:16:33.690 "data_size": 7936 00:16:33.690 }, 00:16:33.690 { 00:16:33.690 "name": "BaseBdev2", 00:16:33.690 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:33.690 "is_configured": true, 00:16:33.690 "data_offset": 256, 00:16:33.690 "data_size": 7936 00:16:33.690 } 00:16:33.690 ] 00:16:33.690 }' 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.690 [2024-11-28 16:29:25.362781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.690 [2024-11-28 16:29:25.362971] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:33.690 [2024-11-28 16:29:25.362985] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:33.690 request: 00:16:33.690 { 00:16:33.690 "base_bdev": "BaseBdev1", 00:16:33.690 "raid_bdev": "raid_bdev1", 00:16:33.690 "method": "bdev_raid_add_base_bdev", 00:16:33.690 "req_id": 1 00:16:33.690 } 00:16:33.690 Got JSON-RPC error response 00:16:33.690 response: 00:16:33.690 { 00:16:33.690 "code": -22, 00:16:33.690 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:33.690 } 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.690 16:29:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.631 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.891 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.891 "name": "raid_bdev1", 00:16:34.891 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:34.891 "strip_size_kb": 0, 
00:16:34.891 "state": "online", 00:16:34.891 "raid_level": "raid1", 00:16:34.891 "superblock": true, 00:16:34.891 "num_base_bdevs": 2, 00:16:34.891 "num_base_bdevs_discovered": 1, 00:16:34.891 "num_base_bdevs_operational": 1, 00:16:34.891 "base_bdevs_list": [ 00:16:34.891 { 00:16:34.891 "name": null, 00:16:34.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.891 "is_configured": false, 00:16:34.891 "data_offset": 0, 00:16:34.891 "data_size": 7936 00:16:34.891 }, 00:16:34.891 { 00:16:34.891 "name": "BaseBdev2", 00:16:34.891 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:34.891 "is_configured": true, 00:16:34.891 "data_offset": 256, 00:16:34.891 "data_size": 7936 00:16:34.891 } 00:16:34.891 ] 00:16:34.891 }' 00:16:34.891 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.891 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.152 16:29:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.152 "name": "raid_bdev1", 00:16:35.152 "uuid": "1742b8d6-2e94-4161-8bfc-88b047de1448", 00:16:35.152 "strip_size_kb": 0, 00:16:35.152 "state": "online", 00:16:35.152 "raid_level": "raid1", 00:16:35.152 "superblock": true, 00:16:35.152 "num_base_bdevs": 2, 00:16:35.152 "num_base_bdevs_discovered": 1, 00:16:35.152 "num_base_bdevs_operational": 1, 00:16:35.152 "base_bdevs_list": [ 00:16:35.152 { 00:16:35.152 "name": null, 00:16:35.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.152 "is_configured": false, 00:16:35.152 "data_offset": 0, 00:16:35.152 "data_size": 7936 00:16:35.152 }, 00:16:35.152 { 00:16:35.152 "name": "BaseBdev2", 00:16:35.152 "uuid": "d6f1fbcc-1fdf-5841-8ea0-40200419c536", 00:16:35.152 "is_configured": true, 00:16:35.152 "data_offset": 256, 00:16:35.152 "data_size": 7936 00:16:35.152 } 00:16:35.152 ] 00:16:35.152 }' 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.152 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99319 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99319 ']' 00:16:35.413 16:29:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99319 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.413 16:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99319 00:16:35.413 killing process with pid 99319 00:16:35.413 Received shutdown signal, test time was about 60.000000 seconds 00:16:35.413 00:16:35.413 Latency(us) 00:16:35.413 [2024-11-28T16:29:27.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.413 [2024-11-28T16:29:27.184Z] =================================================================================================================== 00:16:35.413 [2024-11-28T16:29:27.184Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:35.413 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.413 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.413 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99319' 00:16:35.413 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99319 00:16:35.414 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99319 00:16:35.414 [2024-11-28 16:29:27.010430] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.414 [2024-11-28 16:29:27.010582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.414 [2024-11-28 16:29:27.010660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:35.414 [2024-11-28 16:29:27.010673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:35.414 [2024-11-28 16:29:27.043858] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.673 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:35.674 00:16:35.674 real 0m16.076s 00:16:35.674 user 0m21.449s 00:16:35.674 sys 0m1.639s 00:16:35.674 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.674 16:29:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.674 ************************************ 00:16:35.674 END TEST raid_rebuild_test_sb_md_interleaved 00:16:35.674 ************************************ 00:16:35.674 16:29:27 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:35.674 16:29:27 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:35.674 16:29:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99319 ']' 00:16:35.674 16:29:27 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99319 00:16:35.674 16:29:27 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:35.674 00:16:35.674 real 9m55.577s 00:16:35.674 user 14m4.490s 00:16:35.674 sys 1m49.172s 00:16:35.674 16:29:27 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.674 16:29:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.674 ************************************ 00:16:35.674 END TEST bdev_raid 00:16:35.674 ************************************ 00:16:35.674 16:29:27 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:35.674 16:29:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:35.674 16:29:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.674 16:29:27 -- common/autotest_common.sh@10 -- # set +x 00:16:35.934 
************************************ 00:16:35.934 START TEST spdkcli_raid 00:16:35.934 ************************************ 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:35.934 * Looking for test storage... 00:16:35.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.934 16:29:27 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.934 16:29:27 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:35.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.934 --rc genhtml_branch_coverage=1 00:16:35.934 --rc genhtml_function_coverage=1 00:16:35.934 --rc genhtml_legend=1 00:16:35.934 --rc geninfo_all_blocks=1 00:16:35.934 --rc geninfo_unexecuted_blocks=1 00:16:35.934 00:16:35.935 ' 00:16:35.935 16:29:27 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:35.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.935 --rc genhtml_branch_coverage=1 00:16:35.935 --rc genhtml_function_coverage=1 00:16:35.935 --rc genhtml_legend=1 00:16:35.935 --rc geninfo_all_blocks=1 00:16:35.935 --rc geninfo_unexecuted_blocks=1 00:16:35.935 00:16:35.935 ' 00:16:35.935 
16:29:27 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:35.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.935 --rc genhtml_branch_coverage=1 00:16:35.935 --rc genhtml_function_coverage=1 00:16:35.935 --rc genhtml_legend=1 00:16:35.935 --rc geninfo_all_blocks=1 00:16:35.935 --rc geninfo_unexecuted_blocks=1 00:16:35.935 00:16:35.935 ' 00:16:35.935 16:29:27 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:35.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.935 --rc genhtml_branch_coverage=1 00:16:35.935 --rc genhtml_function_coverage=1 00:16:35.935 --rc genhtml_legend=1 00:16:35.935 --rc geninfo_all_blocks=1 00:16:35.935 --rc geninfo_unexecuted_blocks=1 00:16:35.935 00:16:35.935 ' 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:35.935 16:29:27 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:35.935 16:29:27 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99989 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:36.195 16:29:27 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99989 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99989 ']' 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.195 16:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.195 [2024-11-28 16:29:27.814065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:36.195 [2024-11-28 16:29:27.814180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99989 ] 00:16:36.455 [2024-11-28 16:29:27.977884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:36.455 [2024-11-28 16:29:28.025946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.455 [2024-11-28 16:29:28.026010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.048 16:29:28 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.048 16:29:28 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:37.048 16:29:28 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:37.048 16:29:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:37.048 16:29:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 16:29:28 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:37.048 16:29:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:37.048 16:29:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 16:29:28 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:37.048 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:37.048 ' 00:16:38.445 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:38.445 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:38.705 16:29:30 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:38.705 16:29:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.705 16:29:30 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.705 16:29:30 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:38.705 16:29:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.705 16:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.705 16:29:30 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:38.705 ' 00:16:39.645 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:39.904 16:29:31 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:39.904 16:29:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:39.904 16:29:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.904 16:29:31 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:39.904 16:29:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:39.904 16:29:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.904 16:29:31 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:39.904 16:29:31 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:40.474 16:29:32 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:40.474 16:29:32 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:40.474 16:29:32 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:40.474 16:29:32 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.474 16:29:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 16:29:32 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:40.474 16:29:32 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.474 16:29:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 16:29:32 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:40.474 ' 00:16:41.412 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:41.670 16:29:33 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:41.670 16:29:33 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.670 16:29:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 16:29:33 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:41.670 16:29:33 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:41.670 16:29:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 16:29:33 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:41.670 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:41.670 ' 00:16:43.051 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:43.051 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:43.051 16:29:34 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:43.051 16:29:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.051 16:29:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.051 16:29:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99989 00:16:43.051 16:29:34 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99989 ']' 00:16:43.051 16:29:34 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99989 00:16:43.051 16:29:34 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:43.051 16:29:34 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.051 16:29:34 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99989 00:16:43.051 killing process with pid 99989 00:16:43.052 16:29:34 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.052 16:29:34 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.052 16:29:34 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99989' 00:16:43.052 16:29:34 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99989 00:16:43.052 16:29:34 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99989 00:16:43.621 16:29:35 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:43.621 16:29:35 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99989 ']' 00:16:43.621 Process with pid 99989 is not found 00:16:43.621 16:29:35 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99989 00:16:43.621 16:29:35 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99989 ']' 00:16:43.621 16:29:35 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99989 00:16:43.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99989) - No such process 00:16:43.622 16:29:35 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99989 is not found' 00:16:43.622 16:29:35 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:43.622 16:29:35 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:43.622 16:29:35 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:43.622 16:29:35 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:43.622 00:16:43.622 real 0m7.767s 00:16:43.622 user 0m16.318s 00:16:43.622 sys 
0m1.152s 00:16:43.622 16:29:35 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.622 16:29:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.622 ************************************ 00:16:43.622 END TEST spdkcli_raid 00:16:43.622 ************************************ 00:16:43.622 16:29:35 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:43.622 16:29:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:43.622 16:29:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.622 16:29:35 -- common/autotest_common.sh@10 -- # set +x 00:16:43.622 ************************************ 00:16:43.622 START TEST blockdev_raid5f 00:16:43.622 ************************************ 00:16:43.622 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:43.882 * Looking for test storage... 00:16:43.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.882 16:29:35 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:43.882 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.882 --rc genhtml_branch_coverage=1 00:16:43.882 --rc genhtml_function_coverage=1 00:16:43.882 --rc genhtml_legend=1 00:16:43.882 --rc geninfo_all_blocks=1 00:16:43.882 --rc geninfo_unexecuted_blocks=1 00:16:43.882 00:16:43.882 ' 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:43.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.882 --rc genhtml_branch_coverage=1 00:16:43.882 --rc genhtml_function_coverage=1 00:16:43.882 --rc genhtml_legend=1 00:16:43.882 --rc geninfo_all_blocks=1 00:16:43.882 --rc geninfo_unexecuted_blocks=1 00:16:43.882 00:16:43.882 ' 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:43.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.882 --rc genhtml_branch_coverage=1 00:16:43.882 --rc genhtml_function_coverage=1 00:16:43.882 --rc genhtml_legend=1 00:16:43.882 --rc geninfo_all_blocks=1 00:16:43.882 --rc geninfo_unexecuted_blocks=1 00:16:43.882 00:16:43.882 ' 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:43.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.882 --rc genhtml_branch_coverage=1 00:16:43.882 --rc genhtml_function_coverage=1 00:16:43.882 --rc genhtml_legend=1 00:16:43.882 --rc geninfo_all_blocks=1 00:16:43.882 --rc geninfo_unexecuted_blocks=1 00:16:43.882 00:16:43.882 ' 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100253 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:43.882 16:29:35 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100253 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100253 ']' 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.882 16:29:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:43.882 [2024-11-28 16:29:35.621808] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:43.883 [2024-11-28 16:29:35.622045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100253 ] 00:16:44.143 [2024-11-28 16:29:35.782441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.143 [2024-11-28 16:29:35.829748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:44.713 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:44.713 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:44.713 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:44.713 16:29:36 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.713 Malloc0 00:16:44.713 Malloc1 00:16:44.713 Malloc2 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.713 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.713 16:29:36 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.713 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.973 16:29:36 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "523d64db-c327-417a-bd76-9341c36b47fc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "523d64db-c327-417a-bd76-9341c36b47fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "523d64db-c327-417a-bd76-9341c36b47fc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "3d72c9f4-9fa1-4425-bc3e-0908efae6662",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"68236b11-f1ab-4be1-ab72-bd625b6f96cf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a0f87431-c01b-4605-8fea-86611198644a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:44.973 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:44.974 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:44.974 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:44.974 16:29:36 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100253 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100253 ']' 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100253 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100253 00:16:44.974 killing process with pid 100253 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100253' 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100253 00:16:44.974 16:29:36 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100253 00:16:45.545 16:29:37 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:45.545 16:29:37 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:45.545 
16:29:37 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:45.545 16:29:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.545 16:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:45.545 ************************************ 00:16:45.545 START TEST bdev_hello_world 00:16:45.545 ************************************ 00:16:45.545 16:29:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:45.545 [2024-11-28 16:29:37.219653] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:45.545 [2024-11-28 16:29:37.219775] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100291 ] 00:16:45.805 [2024-11-28 16:29:37.381681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.805 [2024-11-28 16:29:37.433351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.065 [2024-11-28 16:29:37.629304] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:46.065 [2024-11-28 16:29:37.629359] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:46.065 [2024-11-28 16:29:37.629381] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:46.065 [2024-11-28 16:29:37.629671] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:46.065 [2024-11-28 16:29:37.629786] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:46.065 [2024-11-28 16:29:37.629800] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:46.065 [2024-11-28 16:29:37.629863] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:16:46.065 00:16:46.065 [2024-11-28 16:29:37.629907] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:46.326 00:16:46.326 real 0m0.734s 00:16:46.326 user 0m0.385s 00:16:46.326 sys 0m0.234s 00:16:46.326 16:29:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.326 16:29:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:46.326 ************************************ 00:16:46.326 END TEST bdev_hello_world 00:16:46.326 ************************************ 00:16:46.326 16:29:37 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:46.326 16:29:37 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:46.326 16:29:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.326 16:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.326 ************************************ 00:16:46.326 START TEST bdev_bounds 00:16:46.326 ************************************ 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100318 00:16:46.326 Process bdevio pid: 100318 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100318' 00:16:46.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100318 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100318 ']' 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.326 16:29:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:46.326 [2024-11-28 16:29:38.039488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:46.326 [2024-11-28 16:29:38.039717] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100318 ] 00:16:46.586 [2024-11-28 16:29:38.200522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.586 [2024-11-28 16:29:38.249964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.586 [2024-11-28 16:29:38.250152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.586 [2024-11-28 16:29:38.250047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.157 16:29:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.157 16:29:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:47.157 16:29:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:47.417 I/O targets: 00:16:47.417 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:47.417 00:16:47.417 00:16:47.417 CUnit - A unit testing framework for C - Version 2.1-3 00:16:47.417 http://cunit.sourceforge.net/ 00:16:47.417 00:16:47.417 00:16:47.417 Suite: bdevio tests on: raid5f 00:16:47.417 Test: blockdev write read block ...passed 00:16:47.417 Test: blockdev write zeroes read block ...passed 00:16:47.417 Test: blockdev write zeroes read no split ...passed 00:16:47.417 Test: blockdev write zeroes read split ...passed 00:16:47.417 Test: blockdev write zeroes read split partial ...passed 00:16:47.417 Test: blockdev reset ...passed 00:16:47.417 Test: blockdev write read 8 blocks ...passed 00:16:47.417 Test: blockdev write read size > 128k ...passed 00:16:47.417 Test: blockdev write read invalid size ...passed 00:16:47.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:47.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:47.417 Test: blockdev write read max offset ...passed 00:16:47.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:47.417 Test: blockdev writev readv 8 blocks ...passed 00:16:47.417 Test: blockdev writev readv 30 x 1block ...passed 00:16:47.417 Test: blockdev writev readv block ...passed 00:16:47.417 Test: blockdev writev readv size > 128k ...passed 00:16:47.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:47.417 Test: blockdev comparev and writev ...passed 00:16:47.417 Test: blockdev nvme passthru rw ...passed 00:16:47.417 Test: blockdev nvme passthru vendor specific ...passed 00:16:47.417 Test: blockdev nvme admin passthru ...passed 00:16:47.417 Test: blockdev copy ...passed 00:16:47.417 00:16:47.417 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.417 suites 1 1 n/a 0 0 00:16:47.417 tests 23 23 23 0 0 00:16:47.417 asserts 130 130 130 0 n/a 
00:16:47.417 00:16:47.417 Elapsed time = 0.309 seconds 00:16:47.417 0 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100318 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100318 ']' 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100318 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100318 00:16:47.417 killing process with pid 100318 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100318' 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100318 00:16:47.417 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100318 00:16:47.677 16:29:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:47.677 00:16:47.677 real 0m1.444s 00:16:47.677 user 0m3.424s 00:16:47.677 sys 0m0.333s 00:16:47.677 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.677 ************************************ 00:16:47.677 END TEST bdev_bounds 00:16:47.677 ************************************ 00:16:47.677 16:29:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:47.938 16:29:39 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 
00:16:47.938 16:29:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:47.938 16:29:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.938 16:29:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:47.938 ************************************ 00:16:47.938 START TEST bdev_nbd 00:16:47.938 ************************************ 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@313 -- # local nbd_list 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100373 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100373 /var/tmp/spdk-nbd.sock 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100373 ']' 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:47.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.938 16:29:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:47.938 [2024-11-28 16:29:39.570200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:47.938 [2024-11-28 16:29:39.570320] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.198 [2024-11-28 16:29:39.740456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.198 [2024-11-28 16:29:39.787755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:48.770 16:29:40 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.031 1+0 records in 00:16:49.031 1+0 records out 00:16:49.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522717 s, 7.8 MB/s 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:49.031 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:49.291 { 00:16:49.291 "nbd_device": "/dev/nbd0", 00:16:49.291 "bdev_name": "raid5f" 00:16:49.291 } 00:16:49.291 ]' 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:49.291 { 00:16:49.291 "nbd_device": "/dev/nbd0", 00:16:49.291 "bdev_name": "raid5f" 00:16:49.291 } 00:16:49.291 ]' 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.291 16:29:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:49.552 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:49.813 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:49.813 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:49.813 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:49.814 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:49.814 /dev/nbd0 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:50.075 16:29:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.075 1+0 records in 00:16:50.075 1+0 records out 00:16:50.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394272 s, 10.4 MB/s 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:50.075 { 00:16:50.075 "nbd_device": "/dev/nbd0", 00:16:50.075 "bdev_name": "raid5f" 00:16:50.075 } 00:16:50.075 ]' 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:50.075 { 00:16:50.075 "nbd_device": "/dev/nbd0", 00:16:50.075 "bdev_name": "raid5f" 00:16:50.075 } 00:16:50.075 ]' 00:16:50.075 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:50.336 256+0 records in 00:16:50.336 256+0 records out 00:16:50.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124475 s, 84.2 MB/s 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:50.336 256+0 records in 00:16:50.336 256+0 records out 00:16:50.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296464 s, 35.4 MB/s 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.336 16:29:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.597 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:50.857 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:51.117 malloc_lvol_verify 00:16:51.117 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:51.117 9dceb966-5b4f-4eea-af72-c1c4b95f384e 00:16:51.117 16:29:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:51.377 9d00548c-e9a1-468a-8ffe-c3863414a818 00:16:51.377 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:51.638 /dev/nbd0 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:51.638 mke2fs 1.47.0 (5-Feb-2023) 00:16:51.638 Discarding device blocks: 0/4096 done 00:16:51.638 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:51.638 00:16:51.638 Allocating group tables: 0/1 done 00:16:51.638 Writing inode tables: 0/1 done 00:16:51.638 Creating journal (1024 blocks): done 00:16:51.638 Writing superblocks and filesystem accounting information: 0/1 done 00:16:51.638 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.638 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100373 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100373 ']' 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100373 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100373 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.899 killing process with pid 100373 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100373' 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100373 00:16:51.899 16:29:43 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100373 00:16:52.160 16:29:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:52.160 00:16:52.160 real 0m4.324s 00:16:52.160 user 0m6.186s 00:16:52.160 sys 0m1.345s 00:16:52.160 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.160 16:29:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:52.160 ************************************ 00:16:52.160 END TEST bdev_nbd 00:16:52.160 ************************************ 00:16:52.160 16:29:43 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:52.160 16:29:43 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:16:52.161 16:29:43 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:16:52.161 16:29:43 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:52.161 16:29:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:52.161 16:29:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.161 16:29:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.161 ************************************ 00:16:52.161 START TEST bdev_fio 00:16:52.161 ************************************ 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:52.161 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:52.161 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.422 16:29:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:52.422 ************************************ 00:16:52.422 START TEST bdev_fio_rw_verify 00:16:52.422 ************************************ 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:52.422 16:29:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:52.683 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:52.683 fio-3.35 00:16:52.683 Starting 1 thread 00:17:04.921 00:17:04.921 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100553: Thu Nov 28 16:29:54 2024 00:17:04.921 read: IOPS=12.7k, BW=49.7MiB/s (52.1MB/s)(497MiB/10001msec) 00:17:04.921 slat (usec): min=16, max=251, avg=18.36, stdev= 2.05 00:17:04.921 clat (usec): min=10, max=473, avg=125.90, stdev=43.75 00:17:04.921 lat (usec): min=28, max=491, avg=144.26, stdev=43.97 00:17:04.921 clat percentiles (usec): 00:17:04.921 | 50.000th=[ 130], 99.000th=[ 208], 99.900th=[ 235], 99.990th=[ 347], 00:17:04.921 | 99.999th=[ 457] 00:17:04.921 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(514MiB/9879msec); 0 zone resets 00:17:04.921 slat (usec): min=7, max=251, avg=16.08, stdev= 4.15 00:17:04.921 clat (usec): min=57, max=1830, avg=289.32, stdev=50.74 00:17:04.921 lat (usec): min=72, max=1944, avg=305.39, stdev=52.61 00:17:04.921 clat percentiles (usec): 00:17:04.921 | 50.000th=[ 293], 99.000th=[ 375], 99.900th=[ 898], 99.990th=[ 1483], 00:17:04.921 | 99.999th=[ 1811] 00:17:04.921 bw ( KiB/s): min=49496, max=55464, per=98.98%, avg=52683.79, stdev=1684.95, samples=19 00:17:04.921 iops : min=12374, max=13866, avg=13170.95, stdev=421.24, samples=19 00:17:04.921 lat (usec) : 20=0.01%, 50=0.01%, 100=16.98%, 
250=41.16%, 500=41.68% 00:17:04.921 lat (usec) : 750=0.10%, 1000=0.04% 00:17:04.921 lat (msec) : 2=0.04% 00:17:04.921 cpu : usr=98.71%, sys=0.56%, ctx=28, majf=0, minf=13448 00:17:04.921 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.921 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.921 issued rwts: total=127244,131461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.921 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:04.921 00:17:04.921 Run status group 0 (all jobs): 00:17:04.921 READ: bw=49.7MiB/s (52.1MB/s), 49.7MiB/s-49.7MiB/s (52.1MB/s-52.1MB/s), io=497MiB (521MB), run=10001-10001msec 00:17:04.921 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=514MiB (538MB), run=9879-9879msec 00:17:04.921 ----------------------------------------------------- 00:17:04.921 Suppressions used: 00:17:04.921 count bytes template 00:17:04.921 1 7 /usr/src/fio/parse.c 00:17:04.921 274 26304 /usr/src/fio/iolog.c 00:17:04.921 1 8 libtcmalloc_minimal.so 00:17:04.921 1 904 libcrypto.so 00:17:04.921 ----------------------------------------------------- 00:17:04.921 00:17:04.922 ************************************ 00:17:04.922 END TEST bdev_fio_rw_verify 00:17:04.922 ************************************ 00:17:04.922 00:17:04.922 real 0m11.233s 00:17:04.922 user 0m11.546s 00:17:04.922 sys 0m0.663s 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "523d64db-c327-417a-bd76-9341c36b47fc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "523d64db-c327-417a-bd76-9341c36b47fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "523d64db-c327-417a-bd76-9341c36b47fc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "3d72c9f4-9fa1-4425-bc3e-0908efae6662",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "68236b11-f1ab-4be1-ab72-bd625b6f96cf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a0f87431-c01b-4605-8fea-86611198644a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:04.922 /home/vagrant/spdk_repo/spdk 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:04.922 00:17:04.922 real 0m11.504s 00:17:04.922 user 0m11.669s 00:17:04.922 sys 0m0.790s 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.922 16:29:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 ************************************ 00:17:04.922 END TEST bdev_fio 00:17:04.922 ************************************ 00:17:04.922 16:29:55 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:04.922 16:29:55 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:04.922 16:29:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:04.922 16:29:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:04.922 16:29:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:04.922 ************************************ 00:17:04.922 START TEST bdev_verify 00:17:04.922 ************************************ 00:17:04.922 16:29:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:04.922 [2024-11-28 16:29:55.544382] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:04.922 [2024-11-28 16:29:55.544521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100706 ] 00:17:04.922 [2024-11-28 16:29:55.709751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:04.922 [2024-11-28 16:29:55.761224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.922 [2024-11-28 16:29:55.761322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.922 Running I/O for 5 seconds... 00:17:06.434 10957.00 IOPS, 42.80 MiB/s [2024-11-28T16:29:59.145Z] 11068.00 IOPS, 43.23 MiB/s [2024-11-28T16:30:00.084Z] 11148.00 IOPS, 43.55 MiB/s [2024-11-28T16:30:01.024Z] 11129.50 IOPS, 43.47 MiB/s [2024-11-28T16:30:01.024Z] 11139.40 IOPS, 43.51 MiB/s 00:17:09.253 Latency(us) 00:17:09.253 [2024-11-28T16:30:01.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.253 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:09.253 Verification LBA range: start 0x0 length 0x2000 00:17:09.253 raid5f : 5.02 4526.77 17.68 0.00 0.00 42465.27 282.61 29992.02 00:17:09.253 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:09.253 Verification LBA range: start 0x2000 length 0x2000 00:17:09.253 raid5f : 5.02 6614.52 25.84 0.00 0.00 29086.21 97.03 21635.47 00:17:09.253 [2024-11-28T16:30:01.024Z] =================================================================================================================== 00:17:09.253 [2024-11-28T16:30:01.024Z] Total : 11141.29 43.52 0.00 0.00 34520.28 97.03 29992.02 00:17:09.512 00:17:09.512 real 0m5.788s 00:17:09.512 user 0m10.706s 00:17:09.512 sys 0m0.267s 00:17:09.512 16:30:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.512 16:30:01 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:09.512 ************************************ 00:17:09.512 END TEST bdev_verify 00:17:09.512 ************************************ 00:17:09.772 16:30:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:09.772 16:30:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:09.772 16:30:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.772 16:30:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:09.772 ************************************ 00:17:09.772 START TEST bdev_verify_big_io 00:17:09.772 ************************************ 00:17:09.772 16:30:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:09.772 [2024-11-28 16:30:01.400238] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:09.772 [2024-11-28 16:30:01.400397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100788 ] 00:17:10.032 [2024-11-28 16:30:01.561880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.032 [2024-11-28 16:30:01.612544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.032 [2024-11-28 16:30:01.612643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.293 Running I/O for 5 seconds... 
00:17:12.202 633.00 IOPS, 39.56 MiB/s [2024-11-28T16:30:04.914Z] 761.00 IOPS, 47.56 MiB/s [2024-11-28T16:30:06.297Z] 782.00 IOPS, 48.88 MiB/s [2024-11-28T16:30:06.868Z] 793.25 IOPS, 49.58 MiB/s [2024-11-28T16:30:07.128Z] 812.40 IOPS, 50.77 MiB/s 00:17:15.357 Latency(us) 00:17:15.357 [2024-11-28T16:30:07.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.357 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:15.357 Verification LBA range: start 0x0 length 0x200 00:17:15.357 raid5f : 5.32 357.90 22.37 0.00 0.00 8864184.38 201.22 373641.06 00:17:15.357 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:15.357 Verification LBA range: start 0x200 length 0x200 00:17:15.357 raid5f : 5.31 454.83 28.43 0.00 0.00 7030637.77 305.86 305872.82 00:17:15.357 [2024-11-28T16:30:07.128Z] =================================================================================================================== 00:17:15.357 [2024-11-28T16:30:07.128Z] Total : 812.73 50.80 0.00 0.00 7839555.39 201.22 373641.06 00:17:15.617 00:17:15.617 real 0m6.070s 00:17:15.617 user 0m11.301s 00:17:15.617 sys 0m0.245s 00:17:15.617 16:30:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.617 16:30:07 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.617 ************************************ 00:17:15.617 END TEST bdev_verify_big_io 00:17:15.617 ************************************ 00:17:15.878 16:30:07 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:15.878 16:30:07 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:15.878 16:30:07 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.878 16:30:07 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:15.878 ************************************ 00:17:15.878 START TEST bdev_write_zeroes 00:17:15.878 ************************************ 00:17:15.878 16:30:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:15.878 [2024-11-28 16:30:07.543505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:15.878 [2024-11-28 16:30:07.543630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100870 ] 00:17:16.138 [2024-11-28 16:30:07.702519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.138 [2024-11-28 16:30:07.755813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.398 Running I/O for 1 seconds... 
00:17:17.339 30423.00 IOPS, 118.84 MiB/s 00:17:17.339 Latency(us) 00:17:17.339 [2024-11-28T16:30:09.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.339 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:17.339 raid5f : 1.01 30400.14 118.75 0.00 0.00 4199.70 1380.83 5723.67 00:17:17.339 [2024-11-28T16:30:09.110Z] =================================================================================================================== 00:17:17.339 [2024-11-28T16:30:09.110Z] Total : 30400.14 118.75 0.00 0.00 4199.70 1380.83 5723.67 00:17:17.599 00:17:17.599 real 0m1.759s 00:17:17.599 user 0m1.389s 00:17:17.599 sys 0m0.250s 00:17:17.599 16:30:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.599 16:30:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:17.599 ************************************ 00:17:17.599 END TEST bdev_write_zeroes 00:17:17.599 ************************************ 00:17:17.599 16:30:09 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:17.599 16:30:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:17.599 16:30:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.599 16:30:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.599 ************************************ 00:17:17.599 START TEST bdev_json_nonenclosed 00:17:17.599 ************************************ 00:17:17.599 16:30:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:17.859 [2024-11-28 
16:30:09.383646] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:17.859 [2024-11-28 16:30:09.384163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100906 ] 00:17:17.859 [2024-11-28 16:30:09.544733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.859 [2024-11-28 16:30:09.596330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.860 [2024-11-28 16:30:09.596429] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:17.860 [2024-11-28 16:30:09.596455] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:17.860 [2024-11-28 16:30:09.596474] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:18.120 00:17:18.120 real 0m0.418s 00:17:18.120 user 0m0.175s 00:17:18.120 sys 0m0.138s 00:17:18.120 16:30:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.120 16:30:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:18.120 ************************************ 00:17:18.120 END TEST bdev_json_nonenclosed 00:17:18.120 ************************************ 00:17:18.120 16:30:09 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:18.120 16:30:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:18.120 16:30:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.120 16:30:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.120 
************************************ 00:17:18.120 START TEST bdev_json_nonarray 00:17:18.120 ************************************ 00:17:18.120 16:30:09 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:18.120 [2024-11-28 16:30:09.878397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:18.120 [2024-11-28 16:30:09.878545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100932 ] 00:17:18.380 [2024-11-28 16:30:10.045358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.381 [2024-11-28 16:30:10.089458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.381 [2024-11-28 16:30:10.089568] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:18.381 [2024-11-28 16:30:10.089598] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:18.381 [2024-11-28 16:30:10.089616] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:18.641 00:17:18.641 real 0m0.420s 00:17:18.641 user 0m0.180s 00:17:18.641 sys 0m0.136s 00:17:18.641 16:30:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.641 16:30:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:18.641 ************************************ 00:17:18.641 END TEST bdev_json_nonarray 00:17:18.641 ************************************ 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:18.641 16:30:10 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:18.641 00:17:18.641 real 0m34.992s 00:17:18.641 user 0m47.375s 00:17:18.641 sys 0m4.803s 00:17:18.641 16:30:10 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.641 16:30:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.641 
************************************ 00:17:18.641 END TEST blockdev_raid5f 00:17:18.641 ************************************ 00:17:18.641 16:30:10 -- spdk/autotest.sh@194 -- # uname -s 00:17:18.641 16:30:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:18.641 16:30:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.641 16:30:10 -- common/autotest_common.sh@10 -- # set +x 00:17:18.641 16:30:10 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:18.641 16:30:10 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:18.641 16:30:10 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:17:18.641 16:30:10 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:18.641 16:30:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.641 16:30:10 -- common/autotest_common.sh@10 -- # set +x 00:17:18.902 16:30:10 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:18.902 16:30:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:18.902 16:30:10 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:18.902 16:30:10 -- common/autotest_common.sh@10 -- # set +x 00:17:20.812 INFO: APP EXITING 00:17:20.812 INFO: killing all VMs 00:17:20.812 INFO: killing vhost app 00:17:20.812 INFO: EXIT DONE 00:17:21.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:21.382 Waiting for block devices as requested 00:17:21.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:21.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:22.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.583 Cleaning 00:17:22.583 Removing: /var/run/dpdk/spdk0/config 00:17:22.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:22.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:22.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:22.583 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:22.583 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:22.583 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:22.583 Removing: /dev/shm/spdk_tgt_trace.pid69164 00:17:22.583 Removing: /var/run/dpdk/spdk0 00:17:22.583 Removing: /var/run/dpdk/spdk_pid100253 00:17:22.583 Removing: /var/run/dpdk/spdk_pid100291 00:17:22.583 Removing: /var/run/dpdk/spdk_pid100318 00:17:22.583 Removing: /var/run/dpdk/spdk_pid100544 00:17:22.583 Removing: /var/run/dpdk/spdk_pid100706 00:17:22.844 Removing: /var/run/dpdk/spdk_pid100788 00:17:22.844 Removing: 
/var/run/dpdk/spdk_pid100870 00:17:22.844 Removing: /var/run/dpdk/spdk_pid100906 00:17:22.844 Removing: /var/run/dpdk/spdk_pid100932 00:17:22.844 Removing: /var/run/dpdk/spdk_pid68995 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69164 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69366 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69453 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69482 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69592 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69606 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69794 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69873 00:17:22.844 Removing: /var/run/dpdk/spdk_pid69958 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70047 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70133 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70176 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70209 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70280 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70390 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70811 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70858 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70905 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70921 00:17:22.844 Removing: /var/run/dpdk/spdk_pid70992 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71002 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71066 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71082 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71129 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71142 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71189 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71202 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71340 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71371 00:17:22.844 Removing: /var/run/dpdk/spdk_pid71460 00:17:22.844 Removing: /var/run/dpdk/spdk_pid72630 00:17:22.844 Removing: /var/run/dpdk/spdk_pid72825 00:17:22.844 Removing: /var/run/dpdk/spdk_pid72954 00:17:22.844 Removing: /var/run/dpdk/spdk_pid73564 00:17:22.844 Removing: /var/run/dpdk/spdk_pid73759 00:17:22.844 Removing: 
/var/run/dpdk/spdk_pid73888 00:17:22.844 Removing: /var/run/dpdk/spdk_pid74493 00:17:22.844 Removing: /var/run/dpdk/spdk_pid74812 00:17:22.844 Removing: /var/run/dpdk/spdk_pid74941 00:17:22.844 Removing: /var/run/dpdk/spdk_pid76271 00:17:22.844 Removing: /var/run/dpdk/spdk_pid76513 00:17:22.844 Removing: /var/run/dpdk/spdk_pid76642 00:17:22.844 Removing: /var/run/dpdk/spdk_pid77972 00:17:22.844 Removing: /var/run/dpdk/spdk_pid78215 00:17:22.844 Removing: /var/run/dpdk/spdk_pid78344 00:17:22.844 Removing: /var/run/dpdk/spdk_pid79685 00:17:22.844 Removing: /var/run/dpdk/spdk_pid80114 00:17:22.844 Removing: /var/run/dpdk/spdk_pid80243 00:17:22.844 Removing: /var/run/dpdk/spdk_pid81673 00:17:22.844 Removing: /var/run/dpdk/spdk_pid81916 00:17:22.844 Removing: /var/run/dpdk/spdk_pid82050 00:17:22.844 Removing: /var/run/dpdk/spdk_pid83475 00:17:23.104 Removing: /var/run/dpdk/spdk_pid83723 00:17:23.104 Removing: /var/run/dpdk/spdk_pid83852 00:17:23.104 Removing: /var/run/dpdk/spdk_pid85287 00:17:23.104 Removing: /var/run/dpdk/spdk_pid85758 00:17:23.104 Removing: /var/run/dpdk/spdk_pid85893 00:17:23.104 Removing: /var/run/dpdk/spdk_pid86020 00:17:23.104 Removing: /var/run/dpdk/spdk_pid86426 00:17:23.104 Removing: /var/run/dpdk/spdk_pid87131 00:17:23.104 Removing: /var/run/dpdk/spdk_pid87514 00:17:23.104 Removing: /var/run/dpdk/spdk_pid88191 00:17:23.104 Removing: /var/run/dpdk/spdk_pid88623 00:17:23.104 Removing: /var/run/dpdk/spdk_pid89356 00:17:23.104 Removing: /var/run/dpdk/spdk_pid89754 00:17:23.104 Removing: /var/run/dpdk/spdk_pid91662 00:17:23.104 Removing: /var/run/dpdk/spdk_pid92095 00:17:23.104 Removing: /var/run/dpdk/spdk_pid92520 00:17:23.104 Removing: /var/run/dpdk/spdk_pid94564 00:17:23.104 Removing: /var/run/dpdk/spdk_pid95033 00:17:23.104 Removing: /var/run/dpdk/spdk_pid95520 00:17:23.104 Removing: /var/run/dpdk/spdk_pid96551 00:17:23.104 Removing: /var/run/dpdk/spdk_pid96869 00:17:23.104 Removing: /var/run/dpdk/spdk_pid97786 00:17:23.104 Removing: 
/var/run/dpdk/spdk_pid98098 00:17:23.104 Removing: /var/run/dpdk/spdk_pid99007 00:17:23.104 Removing: /var/run/dpdk/spdk_pid99319 00:17:23.104 Removing: /var/run/dpdk/spdk_pid99989 00:17:23.104 Clean 00:17:23.104 16:30:14 -- common/autotest_common.sh@1451 -- # return 0 00:17:23.104 16:30:14 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:23.104 16:30:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.104 16:30:14 -- common/autotest_common.sh@10 -- # set +x 00:17:23.104 16:30:14 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:23.104 16:30:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.104 16:30:14 -- common/autotest_common.sh@10 -- # set +x 00:17:23.365 16:30:14 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:23.365 16:30:14 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:23.365 16:30:14 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:23.365 16:30:14 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:23.365 16:30:14 -- spdk/autotest.sh@394 -- # hostname 00:17:23.365 16:30:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:23.365 geninfo: WARNING: invalid characters removed from testname! 
00:17:49.932 16:30:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:49.932 16:30:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:50.871 16:30:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:52.782 16:30:44 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:54.692 16:30:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:56.600 16:30:48 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:58.511 16:30:50 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:58.772 16:30:50 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:17:58.772 16:30:50 -- common/autotest_common.sh@1681 -- $ lcov --version 00:17:58.772 16:30:50 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:17:58.772 16:30:50 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:17:58.772 16:30:50 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:17:58.772 16:30:50 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:17:58.772 16:30:50 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:17:58.772 16:30:50 -- scripts/common.sh@336 -- $ IFS=.-: 00:17:58.772 16:30:50 -- scripts/common.sh@336 -- $ read -ra ver1 00:17:58.772 16:30:50 -- scripts/common.sh@337 -- $ IFS=.-: 00:17:58.772 16:30:50 -- scripts/common.sh@337 -- $ read -ra ver2 00:17:58.772 16:30:50 -- scripts/common.sh@338 -- $ local 'op=<' 00:17:58.772 16:30:50 -- scripts/common.sh@340 -- $ ver1_l=2 00:17:58.772 16:30:50 -- scripts/common.sh@341 -- $ ver2_l=1 00:17:58.772 16:30:50 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:17:58.772 16:30:50 -- scripts/common.sh@344 -- $ case "$op" in 00:17:58.772 16:30:50 -- scripts/common.sh@345 -- $ : 1 00:17:58.772 16:30:50 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:17:58.772 16:30:50 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.772 16:30:50 -- scripts/common.sh@365 -- $ decimal 1 00:17:58.772 16:30:50 -- scripts/common.sh@353 -- $ local d=1 00:17:58.772 16:30:50 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:17:58.772 16:30:50 -- scripts/common.sh@355 -- $ echo 1 00:17:58.772 16:30:50 -- scripts/common.sh@365 -- $ ver1[v]=1 00:17:58.772 16:30:50 -- scripts/common.sh@366 -- $ decimal 2 00:17:58.772 16:30:50 -- scripts/common.sh@353 -- $ local d=2 00:17:58.772 16:30:50 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:17:58.772 16:30:50 -- scripts/common.sh@355 -- $ echo 2 00:17:58.772 16:30:50 -- scripts/common.sh@366 -- $ ver2[v]=2 00:17:58.772 16:30:50 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:17:58.772 16:30:50 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:17:58.772 16:30:50 -- scripts/common.sh@368 -- $ return 0 00:17:58.772 16:30:50 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.772 16:30:50 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:17:58.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.772 --rc genhtml_branch_coverage=1 00:17:58.772 --rc genhtml_function_coverage=1 00:17:58.772 --rc genhtml_legend=1 00:17:58.772 --rc geninfo_all_blocks=1 00:17:58.772 --rc geninfo_unexecuted_blocks=1 00:17:58.772 00:17:58.772 ' 00:17:58.772 16:30:50 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:17:58.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.772 --rc genhtml_branch_coverage=1 00:17:58.772 --rc genhtml_function_coverage=1 00:17:58.772 --rc genhtml_legend=1 00:17:58.772 --rc geninfo_all_blocks=1 00:17:58.772 --rc geninfo_unexecuted_blocks=1 00:17:58.772 00:17:58.772 ' 00:17:58.772 16:30:50 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:17:58.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.772 --rc genhtml_branch_coverage=1 00:17:58.772 --rc 
genhtml_function_coverage=1 00:17:58.772 --rc genhtml_legend=1 00:17:58.772 --rc geninfo_all_blocks=1 00:17:58.772 --rc geninfo_unexecuted_blocks=1 00:17:58.772 00:17:58.772 ' 00:17:58.772 16:30:50 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:17:58.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.772 --rc genhtml_branch_coverage=1 00:17:58.772 --rc genhtml_function_coverage=1 00:17:58.772 --rc genhtml_legend=1 00:17:58.772 --rc geninfo_all_blocks=1 00:17:58.772 --rc geninfo_unexecuted_blocks=1 00:17:58.772 00:17:58.772 ' 00:17:58.772 16:30:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.772 16:30:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:17:58.772 16:30:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:58.772 16:30:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.772 16:30:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.772 16:30:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.772 16:30:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.772 16:30:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.772 16:30:50 -- paths/export.sh@5 -- $ export PATH 00:17:58.772 16:30:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.772 16:30:50 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:58.772 16:30:50 -- common/autobuild_common.sh@479 -- $ date +%s 00:17:58.772 16:30:50 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732811450.XXXXXX 00:17:58.772 16:30:50 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732811450.R1zBLh 00:17:58.772 16:30:50 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:17:58.772 16:30:50 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:17:58.773 16:30:50 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:17:58.773 16:30:50 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:17:58.773 16:30:50 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:58.773 16:30:50 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:17:58.773 16:30:50 -- common/autobuild_common.sh@495 -- $ get_config_params 00:17:58.773 16:30:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:17:58.773 16:30:50 -- common/autotest_common.sh@10 -- $ set +x 00:17:58.773 16:30:50 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:17:58.773 16:30:50 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:17:58.773 16:30:50 -- pm/common@17 -- $ local monitor 00:17:58.773 16:30:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:58.773 16:30:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:58.773 16:30:50 -- pm/common@25 -- $ sleep 1 00:17:58.773 16:30:50 -- pm/common@21 -- $ date +%s 00:17:58.773 16:30:50 -- pm/common@21 -- $ date +%s 00:17:58.773 16:30:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732811450 00:17:58.773 16:30:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732811450 00:17:58.773 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732811450_collect-vmstat.pm.log 00:17:58.773 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732811450_collect-cpu-load.pm.log 00:17:59.713 16:30:51 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:17:59.714 16:30:51 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:17:59.714 16:30:51 -- spdk/autopackage.sh@14 -- $ timing_finish 00:17:59.714 16:30:51 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:59.714 16:30:51 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:59.714 16:30:51 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:59.974 16:30:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:17:59.974 16:30:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:59.974 16:30:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:59.974 16:30:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:59.974 16:30:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:59.974 16:30:51 -- pm/common@44 -- $ pid=102430 00:17:59.974 16:30:51 -- pm/common@50 -- $ kill -TERM 102430 00:17:59.974 16:30:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:59.974 16:30:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:59.974 16:30:51 -- pm/common@44 -- $ pid=102432 00:17:59.974 16:30:51 -- pm/common@50 -- $ kill -TERM 102432 00:17:59.974 + [[ -n 6164 ]] 00:17:59.974 + sudo kill 6164 00:17:59.984 [Pipeline] } 00:17:59.998 [Pipeline] // timeout 00:18:00.003 [Pipeline] } 00:18:00.017 [Pipeline] // stage 00:18:00.024 [Pipeline] } 00:18:00.038 [Pipeline] // catchError 00:18:00.047 [Pipeline] stage 00:18:00.049 [Pipeline] { (Stop VM) 00:18:00.061 [Pipeline] sh 00:18:00.358 + vagrant halt 00:18:02.298 ==> default: Halting domain... 00:18:10.441 [Pipeline] sh 00:18:10.726 + vagrant destroy -f 00:18:13.266 ==> default: Removing domain... 
00:18:13.280 [Pipeline] sh 00:18:13.566 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:18:13.577 [Pipeline] } 00:18:13.587 [Pipeline] // stage 00:18:13.590 [Pipeline] } 00:18:13.598 [Pipeline] // dir 00:18:13.601 [Pipeline] } 00:18:13.610 [Pipeline] // wrap 00:18:13.616 [Pipeline] } 00:18:13.624 [Pipeline] // catchError 00:18:13.633 [Pipeline] stage 00:18:13.634 [Pipeline] { (Epilogue) 00:18:13.643 [Pipeline] sh 00:18:13.922 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:18:18.134 [Pipeline] catchError 00:18:18.136 [Pipeline] { 00:18:18.150 [Pipeline] sh 00:18:18.468 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:18:18.468 Artifacts sizes are good 00:18:18.478 [Pipeline] } 00:18:18.493 [Pipeline] // catchError 00:18:18.505 [Pipeline] archiveArtifacts 00:18:18.513 Archiving artifacts 00:18:18.640 [Pipeline] cleanWs 00:18:18.653 [WS-CLEANUP] Deleting project workspace... 00:18:18.653 [WS-CLEANUP] Deferred wipeout is used... 00:18:18.660 [WS-CLEANUP] done 00:18:18.662 [Pipeline] } 00:18:18.677 [Pipeline] // stage 00:18:18.683 [Pipeline] } 00:18:18.697 [Pipeline] // node 00:18:18.703 [Pipeline] End of Pipeline 00:18:18.758 Finished: SUCCESS